Arguing about data instead of arguing about ideas
Using data from experiments to validate ideas instead of arguing about them
As a writer on Substack, I’m seeing all these new Substack features trying to increase reader engagement and revenue (e.g., chat, podcast, paid subscriber, pledge). I’m curious what readers think of these features. Is it overkill? Do you want more engagement? As for support, you can support me by purchasing my book. For each book sold, I pocket $10 after costs. Don’t care for my book? You can buy me a coffee via Venmo @Shaw-Li
Recently, a reader emailed me asking for “validation” tips. When we build new products or features, how do we know if we’re going to be successful? It reminded me of an argument I once had with a copy designer. It was the third round of edits, and I had stopped caring whether our copy adhered to the Oxford comma or not. Should I have cared?
Intuition/Experience vs. Data/Polls
There are two general approaches to validating a product or feature. At one extreme are people who use their intuition or experience to design the product; it’s validated once the product is built to their specification. At the other extreme are people who have a hypothesis they are seeking to invalidate with data. Notice that I used the word “invalidate” rather than “validate”.
Most of us don’t fall into only one camp. For some product or feature ideas, you might have a strong feeling about what’s “optimal”. Sometimes ideas pop into our heads and we just “feel” they’re right. Once, I was certain that blankets would be the best branded swag gift for a conference. Thank god we never went with this idea. Yet even now, as I recall the event, I still feel blankets would have been great. Why do I feel so strongly about blankets? What data do I have? → None. It just felt right.
For many product or feature ideas, the rationale for our decisions isn’t necessarily rational. Or perhaps it is rational, but not something you can easily articulate (yet) or are willing to share out loud. Maybe I just wanted a free blanket for myself. I can’t recall. More important: what is the cost if my intuition is wrong?
The problem with relying on our intuition is that we can be blinded by our egos, ambitions, and dreams. Maybe your intuition is accurate the first time, the second, the third, the fourth, even the fifth. But every single time?
This isn’t to say you should stop dreaming. Instead, we need a check and balance against our dreams. But how do we pick the right validation steps? Is there a one-size-fits-all answer when it comes to testing product or feature ideas? What can we learn from how drugs are developed?
Drug development is a good analogy. Across the phases of drug development, progressively more expensive validation methods are used to collect data that increases our certainty in rejecting the null hypothesis. In some cases, we reject the null hypothesis (i.e., the drug is effective) but also discover terrible side effects (i.e., the drug is extremely harmful to humans).
Developing validation principles
There isn’t a single set of “validation” tests I can provide (in this article). Instead, I’m going to explain three principles you should develop.
State your idea as a hypothesis
Whether it’s a product or a feature, it’s best to formulate the solution to a problem as a hypothesis statement. Doing so forces you not only to explain your idea in more detail, but also to explicitly state it as a hypothesis for which you’re seeking data. It remains a claim until it is validated.
A hypothesis statement template

I believe that [this product/feature] will [provide this benefit]
for [these customers/users].
I know this is true when [I see this outcome].

Example
I believe that displaying the first two search results from Google when our chatbot doesn’t understand the user’s question will provide value to our users.
I know this is true when I see an increase in MAU for users who click on one of the Google search results versus users who are told that we didn’t understand their question.
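If you track several hypotheses at once, the template can be captured as a tiny data structure so every idea is written down in the same invalidatable form. A minimal sketch; the class and field names are my own, not an established convention:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """One product/feature idea, stated so it can be invalidated by data."""
    feature: str         # [this product/feature]
    benefit: str         # [provide this benefit]
    audience: str        # [these customers/users]
    success_signal: str  # [I see this outcome]

    def statement(self) -> str:
        """Render the idea in the template's wording."""
        return (f"I believe that {self.feature} will {self.benefit} "
                f"for {self.audience}. I know this is true when "
                f"{self.success_signal}.")

# The chatbot example from above:
h = Hypothesis(
    feature="displaying the first two Google search results on fallback",
    benefit="provide value",
    audience="our chatbot users",
    success_signal="MAU increases for users who click a search result",
)
print(h.statement())
```

The point of the structure is the `success_signal` field: an idea without a stated outcome can’t be invalidated, only argued about.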
Think about the cost of validation relative to the idea’s stage
In the drug analogy, pharma companies use techniques that are progressively more expensive (in time, people, and money) as the drug progresses. With your product or feature, you should think in similar terms. This means it’s not practical to design a single experiment to validate multiple objectives. It also means it’s not practical to validate a product with one project. Some examples of different objectives include:
Value to customer at any price point
Value at specific price point
Feasible at small scale
Feasible at large scale
To illustrate this principle, think about this article you are reading. If you’re a subscriber, it might be valuable (hopefully). But if I gated this article behind a $5/month subscription fee, would you still have found it valuable enough?
If you were designing a newsletter product and wanted to determine whether it’s valuable at any price point versus at a specific price point, can you think how you might design the experiment differently?

Level of accuracy to move forward / What’s the cost if we move forward based on inaccurate data?
We want to collect data to invalidate our hypothesis, but data collection is expensive (time, money, people). We need to define ahead of time the level of accuracy we need. For example, some people argue it’s a bad idea to pitch products or ideas to friends and family: they’ll lie to protect you.
Well, if all your friends and family lie to you, you might need to reconsider some of those relationships. The truth is, it depends on the level of accuracy you need to make your decision (say, to quit your day job) and the cost of a wrong decision.

For example, you ask friends for local restaurant recommendations. What’s the cost if you ate at one of the recommendations and the food was bad, or you got food poisoning? Now consider the same question, but you’re picking a restaurant for the one dinner you’ll have on your dream trip to Vietnam. What’s the cost if the food was bad or you got food poisoning on vacation?
A validation mismatch occurs when we either overinvest in resources to obtain data/accuracy not necessary for the decision/stage, or underinvest. That’s why you should state clearly what data you need to see to move forward. To continue my example above, I might run three different validation experiments with different levels of data needs:
Adhoc user validation: Show mockup of the Google search results to 10 people in the office; have at least 2 people show visible interest in the feature
Non-scalable experiment: Work with one live chat agent to present Google search results to 25 users whose questions lack approved content; have the agent mark at least 5 conversations where they believe this benefited the customer
A/B test: Build the feature, then release it to control and test groups to run a quantitative A/B test
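The final, most expensive step produces quantitative data, so it needs a predefined statistical check. One common choice for comparing a retention-style rate between control and test groups is a two-proportion z-test; here is a minimal sketch using only the standard library, with made-up numbers (the counts and the 12%/15.6% rates are illustrative, not from any real experiment):

```python
from math import sqrt, erfc

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """Two-sided two-proportion z-test.

    Asks: is the test group's rate (b) different from control's (a)?
    Returns (z_statistic, p_value).
    """
    p_a = success_a / n_a
    p_b = success_b / n_b
    # Pooled rate under the null hypothesis "both groups share one rate".
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided tail probability
    return z, p_value

# Hypothetical: control users were told "we didn't understand";
# test users saw the first two Google search results.
z, p = two_proportion_ztest(
    success_a=120, n_a=1000,   # control: 120/1000 still active next month
    success_b=156, n_b=1000,   # test:    156/1000 still active next month
)
print(f"z={z:.2f}, p={p:.4f}")  # reject the null at alpha=0.05 if p < 0.05
```

Deciding the sample size and the alpha threshold before the release is part of stating, ahead of time, what data you need to see to move forward.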
By thinking about the decision points and what you need to move forward, you can work backwards to determine which validation experiment is needed and how to collect the data.
These are my thoughts on validating product and feature ideas. Have a principle you’d add, or one you disagree with?