I've taught metrics to AppCademy teams several times since 2013. This is the latest slide deck.
Main goal: move teams away from convenient default metrics and instead start from their own business problems, deriving the metrics needed to solve them.
AppCademy is a 4-week accelerator camp run by AppCampus, a training program for Windows Phone dev teams.
2. What are metrics?
Metrics are the eyes of the business
Eyes are for seeing where you're stepping and where you want to go
Metrics are not for looking cool on the lobby screen
3. Which metrics should I follow?
You don't pick metrics, you pick business problems
Visible change in a metric => visible change in the business
Business problems change and evolve
Seeing problems is not enough
Metrics should point out the root cause and hint at the solution
4. Example: New subscription-based app
Most effective user acquisition channel?
Most efficient organic growth mechanism?
How to fix onboarding?
What features are unused?
Should we make a special offer after 2 or 5 days?
5. Example: Older IAP-based app
Which segments remain under-penetrated?
What makes users leave?
What type of content drives monetization?
Is there content saturation?
7. User acquisition: example metrics
New users
Active users
Magnet features
Acquisition cost, broken down by channel, country, user revenue, etc. (see the sketch after this list)
Channel traffic quality (this is tricky)
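To make the per-channel breakdown concrete, here is a minimal Python sketch of acquisition cost per channel; the channel names, spend figures, and install counts are invented for illustration.

```python
# Hypothetical spend and new-user counts per acquisition channel.
spend = {"facebook_ads": 1200.0, "ad_network_x": 800.0, "cross_promo": 150.0}   # EUR
new_users = {"facebook_ads": 900, "ad_network_x": 400, "cross_promo": 300}

for channel, cost in spend.items():
    cac = cost / new_users[channel]        # cost per acquired user for this channel
    print(f"{channel}: {cac:.2f} EUR per new user")
```

The same breakdown by country or by user-revenue tier is just a different grouping key.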
8. Engagement: example metrics
Back in X days after first use
Session length and its relation to revenue/retention
Feature coverage and popularity
Funnels, onboarding effectiveness
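As a rough sketch of the funnel idea, the snippet below computes how many new users reach each onboarding step from a raw event log; the step names and the tiny event list are assumptions, not a real tracking schema.

```python
from collections import defaultdict

# Ordered onboarding steps (hypothetical event names).
FUNNEL = ["app_opened", "tutorial_started", "tutorial_finished", "first_core_action"]

# (user_id, event_name) pairs, as they might come out of an analytics export.
events = [
    ("u1", "app_opened"), ("u1", "tutorial_started"), ("u1", "tutorial_finished"),
    ("u2", "app_opened"), ("u2", "tutorial_started"),
    ("u3", "app_opened"),
]

reached = defaultdict(set)
for user, event in events:
    reached[event].add(user)

users_left = reached[FUNNEL[0]]
total = len(users_left)
for step in FUNNEL:
    users_left = users_left & reached[step]   # must have hit every step so far
    print(f"{step}: {len(users_left)}/{total} ({100 * len(users_left) / total:.0f}%)")
```

The step where the percentage drops hardest is where onboarding needs fixing.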
9. Retention: example metrics
It's way cheaper to keep a user than to find a new one
Active after X days since first use
Time between visits
Weekly churn
Core features, what keeps users coming back?
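A minimal sketch of "active after X days" retention and weekly churn, assuming you can export per-user activity dates; all the dates below are made up.

```python
from datetime import date

# Hypothetical activity log: user id -> dates on which the user was active.
activity = {
    "u1": [date(2014, 5, 1), date(2014, 5, 2), date(2014, 5, 8)],
    "u2": [date(2014, 5, 1)],
    "u3": [date(2014, 5, 3), date(2014, 5, 10)],
}

def retained_after(days):
    """Share of users seen again at least `days` days after their first visit."""
    hits = sum(
        1 for visits in activity.values()
        if any((v - min(visits)).days >= days for v in visits)
    )
    return hits / len(activity)

print("D7 retention:", retained_after(7))

def active_in_week(week_start):
    """Users with at least one visit in the 7 days starting at week_start."""
    return {u for u, visits in activity.items()
            if any(0 <= (v - week_start).days < 7 for v in visits)}

# Weekly churn: of the users active in one week, how many are gone the next?
prev, curr = active_in_week(date(2014, 4, 28)), active_in_week(date(2014, 5, 5))
print("Weekly churn:", len(prev - curr) / len(prev))
```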
10. Monetization: example metrics
Most freemium apps see a monetization rate of roughly 2%
Monetizing features, what kind to introduce next?
Content saturation, i.e., spending walls
Promotion success, which hooks work?
Time of first monetization
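A small sketch of the monetization rate and time to first purchase, assuming you know each user's install date and first purchase date; the example users are fictional.

```python
from datetime import date
from statistics import median

# Hypothetical users: install date plus first purchase date (None = never paid).
users = {
    "u1": {"installed": date(2014, 5, 1), "first_purchase": date(2014, 5, 4)},
    "u2": {"installed": date(2014, 5, 1), "first_purchase": None},
    "u3": {"installed": date(2014, 5, 2), "first_purchase": date(2014, 5, 2)},
    "u4": {"installed": date(2014, 5, 3), "first_purchase": None},
}

payers = [u for u in users.values() if u["first_purchase"] is not None]
print("Monetization rate:", len(payers) / len(users))   # 0.5 in this toy data; ~2% in the wild

days_to_pay = [(u["first_purchase"] - u["installed"]).days for u in payers]
print("Median days to first purchase:", median(days_to_pay))   # hints at when to time an offer
```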
12. Treat users as somewhat individual
Analysis and optimization across the whole userbase is not worth it
Analysis and optimization of individual users is not worth it
Find criteria that produce noticeable differences between groups
This may vary from metric to metric
13. Subgroup examples
Impact of app localization varies wildly between countries
Users who installed during a weekend can be converted more aggressively
Users with an animal avatar react great to this promotion
Launching the new version made user count go up, but conversion rates suffered
Feature X is very popular on average, but barely used among paying users
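A rough sketch of this subgroup approach: compute the same metric per candidate segment instead of one global number, and look for big gaps. The segment keys, users, and conversion flags are invented.

```python
from statistics import mean

# Hypothetical users with candidate segmentation keys and a conversion flag.
users = [
    {"country": "FI", "installed_on_weekend": True,  "converted": True},
    {"country": "FI", "installed_on_weekend": False, "converted": False},
    {"country": "DE", "installed_on_weekend": True,  "converted": False},
    {"country": "DE", "installed_on_weekend": False, "converted": False},
    {"country": "FI", "installed_on_weekend": True,  "converted": True},
]

def conversion_by(key):
    """Conversion rate per value of one segmentation key."""
    groups = {}
    for u in users:
        groups.setdefault(u[key], []).append(1.0 if u["converted"] else 0.0)
    return {value: mean(flags) for value, flags in groups.items()}

for key in ("country", "installed_on_weekend"):
    print(key, conversion_by(key))   # large gaps between groups => a criterion worth splitting on
```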
14. Practical issues with metrics
Data quality is absolutely horrible in many cases
Special doom pits: timestamps, IDs
The product and the users change => data changes
Long term aggregates go wrong
Metrics lose their meaning
15. Statistical significance
Humans are by nature horrible at interpreting statistics
Things get even worse when there's lots of data and no clear goal
You are not an exception
Guidelines
Be wary of any signals other than the painfully obvious ones
Always verify
Even service providers screw up multiple hypothesis testing
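A sketch of why this bites: if you compare ten segments or variants at a 5% significance level, roughly one is expected to look "significant" by pure chance, so the acceptance threshold has to be tightened (here with a simple Bonferroni correction). The conversion counts are fabricated and the normal-approximation test is a simplification.

```python
from math import sqrt, erf

def two_proportion_p(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a difference in conversion rates (normal approximation)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0
    z = abs(p_a - p_b) / se
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))   # 2 * (1 - normal CDF)

# Ten hypothetical comparisons: (conversions_A, users_A, conversions_B, users_B).
tests = [(50, 1000, 85, 1000)] + [(50, 1000, 55, 1000)] * 9
alpha = 0.05
for i, (ca, na, cb, nb) in enumerate(tests):
    p = two_proportion_p(ca, na, cb, nb)
    significant = p < alpha / len(tests)   # Bonferroni: divide alpha by the number of tests
    print(f"test {i}: p = {p:.3f}, significant after correction: {significant}")
```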
16. Service providers vs. DIY
Collecting and analyzing data is expensive for a small team => stay with service providers until you can't
Decent services: GameAnalytics, Omniata, MixPanel, KissMetrics
Collect as much as you can; the use cases will emerge
Your data is almost certainly tiny => don't overdo the tools
Getting data collection right is MUCH more difficult than you expect
Getting the numbers right is MUCH more difficult than you expect
18. Things are not normal
School teaches you that everything is a Gaussian
That's just not true
Most things follow a power law, not a normal distribution
People don't act the way you think
19. This is what most revenue/engagement/whatever metrics look like
Next, remove the non-paying users
20. But the result will not be like this normal distribution
21. This is the actual form
The numbers are heavily concentrated at the low end, with a long tail reaching very high
24. Power law
Follows from the principle "whoever has will be given more"
Example: Web pages get links in proportion to their popularity
=> virtuous cycle
Characterized by 1) huge whales 2) huge mass at the bottom
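A toy simulation of that mechanism (preferential attachment): each new link goes to a page with probability proportional to the links it already has. The page count and number of links are arbitrary, but a heavy concentration at the top emerges regardless.

```python
import random

random.seed(0)
links = [1] * 100                      # 100 pages, each starting with one link
for _ in range(10_000):                # hand out 10,000 further links
    page = random.choices(range(len(links)), weights=links)[0]
    links[page] += 1                   # whoever has will be given more

links.sort(reverse=True)
print("links held by the top 5 pages:", links[:5])
print("share held by the top 10% of pages:", sum(links[:10]) / sum(links))
```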
25. Implications of power laws
Averages are worse than useless
Your userbase has very diverse subsets, treat them that way
More users means more users in the future
(Getting Featured in the app store actually works)
=> Only two relevant factors: new users and especially retention
Network effects are very powerful
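A small demonstration of the "averages are worse than useless" point, using a heavy-tailed per-user revenue sample drawn from a Pareto distribution with invented parameters: the mean lands far above what a typical user spends, and a tiny share of whales carries most of the revenue.

```python
import random
from statistics import mean, median

random.seed(0)
# Heavy-tailed per-user revenue (Pareto tail, shifted to start at zero).
revenue = sorted((random.paretovariate(1.2) - 1 for _ in range(10_000)), reverse=True)

print("mean revenue per user:   ", round(mean(revenue), 2))
print("median revenue per user: ", round(median(revenue), 2))
print("revenue share of top 1%: ", round(sum(revenue[:100]) / sum(revenue), 2))
```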