Lean Analytics book review

Lean* is on a roll. I read Lean Analytics on a plane last week, got through about half of it, and liked what I read. I’ve recommended it to the product owners on my team so that they can get a quick summary of best practices for analytics. The authors have laid the book out like an “Analytics for Dummies” title.

I think the first half of the book is useful to get grounded, but here are the things I did not like:

  • Focuses only on web analytics
  • Does not really discuss measuring feature use even for a web app

Here are the things I liked:

What to measure

You can track many metrics, but it’s important to collect as few as you can, and only the ones you can control. For example, many product managers are not in charge of pricing, marketing, or the GTM. So… should you measure customer acquisition metrics?

Picking the right metric is more important than almost anything else.

Trust your gut

Eventually, product managers and entrepreneurs get paid for their judgement, not just for doing what people tell them to do today. That judgement is the skill to hone: make the call, measure the results, and course-correct.

Analytics for Desktop and SaaS Software Products that want to be lean

I’ve been thinking a lot about what’s important to measure for desktop software products and SaaS projects, and here are my thoughts:

You can gather operational metrics and leading indicators that can inform your decisions as a product team, and I think there is value in collecting both. It’s important to decide which key metrics you want to focus on and whether there are leading indicators that can predict changes in them. For example: reduced active use is a leading indicator of reduced revenue in the future, and a drop in trial downloads is a leading indicator of reduced trial conversions and, eventually, fewer new customers.
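To make this concrete, here is a minimal sketch of a week-over-week check on a leading indicator, assuming you already export weekly active-use numbers somewhere; the data, threshold, and function name are all hypothetical:

```python
# Hypothetical weekly active-use counts for the last 5 weeks.
weekly_active_users = [1200, 1180, 1150, 1100, 1040]

def week_over_week_change(series):
    """Percent change between the last two data points."""
    prev, last = series[-2], series[-1]
    return (last - prev) / prev * 100

change = week_over_week_change(weekly_active_users)
if change < -2.0:  # the alert threshold is a judgement call per product
    print(f"Active use fell {abs(change):.1f}% week-over-week; "
          "expect revenue pressure down the line.")
```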

Generally it helps to have uninterrupted time and a piece of paper to jot down what’s important to your business. For example: is your goal to grow revenue? If yes, here are the hypotheses you might want to test:

  • Marketing is not effective – not enough new customers are coming in?
  • New customers are coming in, but existing customers are leaving faster?

So, I’ve tried to categorise the metrics I gather for any software project I work on:

Customer Acquisition Funnel
  • This is different for every channel through which you sell your product. The direct channel is the simplest to understand, and its metrics are listed below:
  • Visits to the landing page – say “myproduct.com”
  • Landing page to trial download or “sign up” conversion
  • Trial to full product conversion
These, to me, are the most important metrics, because they drive the business goals. Everything else is about making sure you provide sustained value to keep people once you’ve got them. Keeping existing customers is always easier than getting new ones. If these numbers are healthy but the business is tanking, then you start focussing on customer retention. (A sketch of these funnel conversions follows below.)
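To make the arithmetic concrete, here is a minimal sketch of computing these conversion rates in Python; the stage names and counts are made up:

```python
# Hypothetical direct-channel funnel counts, top to bottom.
funnel = {
    "landing_page_visits": 50_000,
    "trial_downloads": 2_500,
    "full_product_purchases": 250,
}

# Conversion rate between each pair of adjacent stages.
stages = list(funnel.items())
for (name, count), (next_name, next_count) in zip(stages, stages[1:]):
    print(f"{name} -> {next_name}: {next_count / count:.1%}")
```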
Customer Retention Metrics
  • Active use: Usage by version (number of app launches by version per day)
  • Active use: Files saved in each app launch
  • Cohort analysis for subscribers
    • I’m looking for a simple spreadsheet that looks like this:
    • | customerID (or GUID) | Subscription date | Subscription Type | Cancellation Date |
    • Such a spreadsheet can also answer my customer acquisition questions (a cohort sketch over this table follows after this list).
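For illustration, here is a minimal sketch of a cohort summary over that table, assuming it is exported as a CSV named subscribers.csv with ISO-formatted dates; both the file name and the date format are my assumptions:

```python
import csv
from collections import defaultdict

# One bucket per cohort, where a cohort is the month of subscription.
cohorts = defaultdict(lambda: {"subscribed": 0, "cancelled": 0})

with open("subscribers.csv", newline="") as f:
    for row in csv.DictReader(f):
        cohort = row["Subscription date"][:7]  # e.g. "2013-06"
        cohorts[cohort]["subscribed"] += 1
        if row["Cancellation Date"]:  # empty means still subscribed
            cohorts[cohort]["cancelled"] += 1

for cohort, c in sorted(cohorts.items()):
    retained = c["subscribed"] - c["cancelled"]
    print(f"{cohort}: {c['subscribed']} joined, {retained} still active "
          f"({retained / c['subscribed']:.0%})")
```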

Customer Profile Metrics

  • Platform – Mac/Win/iOS
  • A questionnaire during app install that asks the user to share more about themselves

Feature Usage Metrics

  • Most used features – be careful about when this event is logged and how it’s logged
  • Are new features being used, and how will you agree on what counts as heavy use?
    • Look at new-feature usage per session or per day post-release (see the sketch after this list)
  • Is the addition of new features impacting customer retention or customer acquisition metrics?
    • This is really important, especially for established products. You may find that nothing you do moves the business metrics. If so, why develop features at all? Work on a new product, a service, or something else.
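Here is a minimal sketch of summarising feature usage per session from an event log; the event shape and the sample data are hypothetical, and real instrumentation will carry more fields:

```python
from collections import Counter, defaultdict

# (session_id, feature_name) pairs - hypothetical sample data.
events = [
    ("s1", "export_pdf"), ("s1", "export_pdf"), ("s1", "new_file"),
    ("s2", "export_pdf"), ("s3", "new_file"), ("s3", "spell_check"),
]

# Total uses of each feature, and the sessions each feature appeared in.
uses = Counter(feature for _, feature in events)
sessions_using = defaultdict(set)
for session, feature in events:
    sessions_using[feature].add(session)

total_sessions = len({session for session, _ in events})
for feature, count in uses.most_common():
    reach = len(sessions_using[feature]) / total_sessions
    print(f"{feature}: {count} uses, seen in {reach:.0%} of sessions")
```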

While any experienced developer can instrument your app so that you can log usage data on your servers, it’s important to get the user to opt into usage tracking. Here are some companies that make it easy to instrument app usage and help startups get a better understanding of their customers:

  • http://www.deskmetrics.com
  • http://www.trackerbird.com/
  • Omniture from Adobe
  • Google Analytics, since it’s free
  • You can also implement a custom logging solution built by your engineering team (a minimal sketch follows)
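If you go the custom route, here is a minimal sketch of an opt-in usage logger; the endpoint URL and payload shape are assumptions, and the key point is that nothing is sent unless the user explicitly opts in:

```python
import json
import time
import urllib.request

class UsageLogger:
    """Sends usage events only if the user has explicitly opted in."""

    def __init__(self, endpoint, opted_in):
        self.endpoint = endpoint  # your own collection server
        self.opted_in = opted_in  # set from an explicit user choice

    def log(self, feature, **details):
        if not self.opted_in:
            return  # respect the user's choice: send nothing
        payload = {"feature": feature, "ts": time.time(), **details}
        request = urllib.request.Request(
            self.endpoint,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(request)

# Hypothetical usage:
# logger = UsageLogger("https://metrics.example.com/events", opted_in=True)
# logger.log("export_pdf", app_version="2.1")
```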

 

More when I get time.

Great day as a Product Manager

As a product manager based in India, it’s great when your product gets centre stage at a worldwide conference in the US. This happened to me last week, and it made all the struggles of the last 8 months worth it. But the real heroes are the engineers on the product team. They put in the real work to realise, shape, and rationalise management’s vision. It was gratifying to send pictures and emails to the team letting them know that their work mattered and was being showcased at the highest level.

It’s only after shipping that the real analytical work begins as we try to answer the following questions:

  1. Are people using the new features?
  2. If yes, how much are they using them?
  3. What amount of use makes the investment in a feature worthwhile?
  4. Are these features bringing new users to the product? (If that was a goal for the release.)
  5. Are the new features usable?

I try to get question #5 answered via prerelease testing, but all the other questions are truly answered only after the product reaches customers’ hands. I know that #4 was not a goal for this release.

#3 is always tricky. Sometimes you invest far more resources in a feature than you anticipated at the beginning of the cycle. This is just the engineering truth: you discover usability issues late, or the engineering team discovers workflow issues that nobody thought of beforehand.

Sometimes you find a single bug or a performance issue that prevents you from shipping a feature that you worked on for 4-8 weeks. This is still not bad considering that only a few releases ago we would have wasted many more months of work since we were were following a waterfall approach. Every time I run into such situations I feel proud that we develop software incrementally and send it out to a set of users for testing every month. This validates our assumptions earlier and allows us to discover issues earlier. And, that is worth the pain the process changes bring with them.