Diving into engagement metrics for B2B SaaS products.
When product managers say “we want to increase engagement,” what do we really mean?
In my articles on “Important B2B SaaS metrics”, I ended with the point that there’s often confusion among product managers when we talk about the term “product metrics”. I guarantee that if you ask around, different product managers will name different product metrics that matter.
One reason is that there is no single metric that does it all. What a product manager cares about will vary based on the stage of the company, the stage of the product, the industry, and the product manager’s responsibilities. Today, I’m going to dive into engagement metrics for B2B SaaS products to make this point.
In July, my friend Jeff sent me this article, “Up and Down the Ladder of Abstraction”. The core concept is moving up and down between levels of thought.
The model describes varying levels of abstraction (up) and concreteness (down) and helps describe our language and thoughts.
The higher up the ladder you are, the more abstract the idea, language, or thought is. The lower you are on the ladder, the more concrete the idea, language, or thought is.
I thought the article was applicable to this discussion because we’re going to be moving down the ladder, from high-level business metrics down to engagement metrics.
Starting at the highest level
Previously, when I wrote about B2B SaaS metrics, I started at the highest level of metrics, the financial metrics that are often used to discuss and measure the overall state of the business. These metrics (Customer Acquisition Cost, Average Revenue per Customer, Cost of Service, Customer Retention Rate, and Revenue Retention Rate) are usually reported in terms of dollars. These are also the metrics that every business works to improve by lowering acquisition costs, increasing revenue from customers, lowering the cost of service, and improving customer and revenue retention rates. Regardless of other objectives a business has, these are core because failure to maintain an equilibrium across them will result in the business shutting down.
Diving down to engagement metrics when starting high
Here’s a scenario. As a product manager, you’re given the objective to increase revenue for a B2B SaaS product. What do you do? Well, you start by decomposing and walking down the ladder.
Decompose the inputs that would cause revenue to increase. Ask yourself: what would cause revenue to increase? (A quick sketch of how these levers combine follows the list below.)
What if you had more paid customers (i.e., increase # of customers)?
What if you had a better customer retention rate (i.e., increase customer retention)?
What if customers spent more (i.e., increase ARPC)?
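To make the decomposition concrete, here’s a minimal sketch in Python of how the three levers combine into revenue. The function, figures, and scenarios are hypothetical, chosen only for illustration.

```python
# Hypothetical, simplified model: revenue over a period as a function of the
# three levers above. All numbers are made up for illustration.

def projected_revenue(paying_customers: int,
                      customer_retention_rate: float,
                      avg_revenue_per_customer: float) -> float:
    """Revenue contributed by the customers you keep over the period."""
    retained_customers = paying_customers * customer_retention_rate
    return retained_customers * avg_revenue_per_customer

baseline = projected_revenue(200, 0.80, 12_000)           # $1,920,000
more_customers = projected_revenue(220, 0.80, 12_000)     # lever 1: +10% customers
better_retention = projected_revenue(200, 0.88, 12_000)   # lever 2: +10% retention
higher_arpc = projected_revenue(200, 0.80, 13_200)        # lever 3: +10% ARPC

# Each lever moves revenue by the same amount here; the question in the next
# step is which one is cheapest to move.
```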
Pick an input to focus on using a benefit/cost ratio. As you can see from above, there are different ways to increase revenue, which means different solutions, which means different projects. If you decided to focus on “more paid customers”, you could probably imagine starting projects that focus on getting more sales leads, qualifying more sales leads, improving sales conversion, shortening the sales cycle, etc.
What matters here is picking the input given the projected cost/benefit. What I mean by cost/benefit is the relative effort it would take to improve something versus the amount of improvement you’d reasonably expect. For example, if you already have a lot of paid customers and you have a good sales pipeline, but paid customers are canceling after 10 days, you may get more benefit by focusing on increasing customer retention.
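One rough way to make that comparison explicit is to rank the candidate inputs by their benefit/cost ratio. The sketch below uses entirely made-up benefit and effort scores; it’s an illustration of the idea, not a prescribed framework.

```python
# Hypothetical scoring of the three inputs. "benefit" is the revenue impact
# you'd reasonably expect, "cost" is the relative effort to get it; both are
# rough estimates you'd make with your team, not measured values.
candidates = {
    "more paid customers":         {"benefit": 3, "cost": 5},
    "better customer retention":   {"benefit": 4, "cost": 2},
    "higher revenue per customer": {"benefit": 2, "cost": 3},
}

ranked = sorted(candidates.items(),
                key=lambda item: item[1]["benefit"] / item[1]["cost"],
                reverse=True)

for name, score in ranked:
    print(f"{name}: benefit/cost = {score['benefit'] / score['cost']:.2f}")
# In this made-up example, retention wins (4 / 2 = 2.0), which mirrors the
# scenario above where paid customers are canceling after 10 days.
```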
For the rest of this article, I’m going to focus on customer retention because it allows me to connect to engagement metrics.
Brainstorm and decompose inputs that would increase customer retention.
We’ve now walked further down the ladder and finally gotten to engagement. Why do we call it engagement? The logic is simple, and it started with measuring usage, or the time a user spent using the software. If a user spends a lot of time in the software, the unstated assumption is that the user must be getting value, is likely to continue using the software, and will remain a paying customer.
But as you can guess, time spent doesn’t mean quality time. The user could be spending a lot of time using the software because:
there are bugs preventing task completion
there are a lot of steps required to complete the task
the user is multi-tasking
the user is learning and unfamiliar with the software
the time is spent navigating around the software
So, we moved from pure usage to the term “engagement”, which tries to measure quality time spent. We want engaged customers, not just someone spending every waking moment using the software.
The most common engagement metric in B2B SaaS is the percentage of active users per customer, which my previous article discussed. The key task is defining “active” to ensure it measures quality time spent. For example, if you define active as “having logged in”, then you’re essentially saying logging in is “quality time spent.” Second, it’s important to measure this percentage for the users of each customer. Recall that in B2B SaaS, I define the term “user” as an individual and “customer” as a business. You want to measure this for all the individuals within a business and not just create an aggregate across all users, because the blended percentage isn’t actionable. In our walk down the ladder, we want to identify customers with a low percentage of active users, which is correlated with customer churn. This allows us to understand why those specific users aren’t “active” and devise projects to make them “active”.
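As a minimal sketch, assuming you already have a per-user table with a customer identifier and an “active” flag derived from your chosen definition (the field names and companies below are hypothetical), the per-customer percentage is straightforward to compute:

```python
from collections import defaultdict

# Hypothetical rows: each user belongs to one customer (a business) and has
# an "active" flag derived from whatever definition of quality time you chose
# (e.g., completed a key task this month), not merely "logged in".
users = [
    {"customer": "Acme Corp", "user": "alice", "active": True},
    {"customer": "Acme Corp", "user": "bob",   "active": False},
    {"customer": "Acme Corp", "user": "carol", "active": False},
    {"customer": "Globex",    "user": "dave",  "active": True},
    {"customer": "Globex",    "user": "erin",  "active": True},
]

totals = defaultdict(lambda: {"active": 0, "total": 0})
for row in users:
    totals[row["customer"]]["total"] += 1
    totals[row["customer"]]["active"] += row["active"]

for customer, counts in totals.items():
    pct = 100 * counts["active"] / counts["total"]
    print(f"{customer}: {pct:.0f}% active users")
# Acme Corp: 33% active users  <- low percentage: a churn risk worth investigating
# Globex: 100% active users
```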
In addition to the metric “percentage of active users per customer”, the other two common metrics are task completion success rate (sometimes referred to as feature usage or feature adoption rate) and task completion cycle rate.
Task completion success rate: The percentage of started tasks that are completed. The purpose of this is to track the different features that are associated with different tasks. As a SaaS product adds new features, it supports more tasks, and they can’t all be measured as part of the “active user” metric. With this metric, you can break things down and track key tasks. An example: if 100 “submit invoice” tasks are started but only 80 are completed, then the task completion success rate is 80% (80/100). Note how the task completion success rate is user agnostic, but task specific.
Task completion cycle rate: The percentage of completed tasks that are finished within the expected task cycle time. This measures performance. If 80 “submit invoice” tasks were completed and the baseline is that the task takes 5 minutes, but only 40 of the 80 were completed within 5 minutes, then the rate is 50% (40/80). You are literally measuring the time it takes to complete tasks.
If you’re advanced, you can combine the two metrics into an acceptable task completion rate, which is just saying that, in total, 40 out of 100 tasks were completed successfully and within the expected cycle time of 5 minutes. But combined metrics like this are better for reporting, and you’ll still want the drill-down metrics for taking action.
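Pulling the three task metrics together, here’s a minimal sketch using the “submit invoice” numbers above; the task durations, baseline, and names are hypothetical:

```python
from datetime import timedelta

# Hypothetical "submit invoice" task data: 100 tasks started, 80 completed,
# with the expected cycle time baseline set at 5 minutes.
EXPECTED_CYCLE_TIME = timedelta(minutes=5)

started = 100
durations = [timedelta(minutes=3)] * 40 + [timedelta(minutes=9)] * 40  # 80 completed
completed = len(durations)
within_baseline = sum(d <= EXPECTED_CYCLE_TIME for d in durations)

task_completion_success_rate = completed / started           # 80 / 100 = 80%
task_completion_cycle_rate = within_baseline / completed     # 40 / 80  = 50%
acceptable_task_completion_rate = within_baseline / started  # 40 / 100 = 40%

print(f"success rate:    {task_completion_success_rate:.0%}")
print(f"cycle rate:      {task_completion_cycle_rate:.0%}")
print(f"acceptable rate: {acceptable_task_completion_rate:.0%}")
```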
A closing word on NPS, CSAT, or Happiness metrics as part of engagement metrics.
While researching this article, I often saw people talk about metrics like NPS, CSAT, or Happiness as another component of engagement metrics. The unstated assumption is that happy, satisfied users who are willing to refer others to the product must be engaged. I’ll admit that I used to think this way too and have done my share of implementing surveys and software to automatically capture and calculate NPS.
However, I no longer believe it’s the right metric for product managers to track in measuring engagement for two reasons.
NPS doesn’t really tell you anything actionable. As we’ve walked down the ladder to engagement metrics, the goal is to be more concrete by using metrics to guide our actions. Identifying users who aren’t “active” allows us to create projects to guide those users to become active. Identifying tasks that never get completed or long cycle times allows us to drill down on projects to reduce cycle time or reduce task abandonment. However, NPS gives no actionable guidance because anything could be causing a low or high NPS.
NPS is self-reported and not accurate. We know that people are bad at predicting their future behavior, yet the standard NPS question, “How likely is it that you would recommend [company X] to a friend or colleague?”, asks exactly that. It doesn’t ask about past behavior (i.e., have you recommended), but asks users to predict future behavior (i.e., would you). Compared to the metrics I’ve discussed above, which are based on actual past behavior, I’d trust past behavior to be the better indicator of future behavior. There is a reason product managers are taught not to blindly trust what a user says, but to observe what a user does. Why should NPS be treated any differently?
Note: Thanks to all my readers. I’ve been publishing consistently weekly since September of 2020. As a writer, it’s time for me to take a writing break so I can finish working with my copy editor on my book. But if you have topic ideas for the future, reply back to this email or leave a comment below. Cheers!
Additional Reading
Good and Bad Market Research: A Critical Review of Net Promoter Score
Questions about the ultimate question: conceptual considerations in evaluating Reichheld's net promoter score (NPS)
The Fallacy of the Net Promoter Score: Customer Loyalty Predictive Model
If you’re going to measure one metric, measure this: Introducing the product engagement score