CHAPTER 52 Demand Testing Techniques

One of the biggest possible wastes of time and effort, and the reason for countless failed startups, is when a team designs and builds a product—testing usability, testing reliability, testing performance, and doing everything they think they're supposed to do—yet, when they finally release the product, they find that people won't buy it.

Even worse, it's not that they sign up for a trial in significant numbers but then for some reason decide not to buy. We can usually recover from that. It's that they don't even want to sign up for the trial. That's a tremendous and often fatal problem.

You might experiment with pricing, positioning, and marketing, but you eventually conclude that this is just not a problem people are concerned enough about.

The worst part of this scenario is that, in my experience, it's so easily avoided.

The problem I just described can happen at the product level, such as an all‐new product from a startup, or at the feature level. The feature example is depressingly common. Every day, new features get deployed that don't get used. And, this case is even easier to prevent.


Suppose you were contemplating a new feature, perhaps because a large customer is asking for it, maybe because you saw that a competitor has the feature, or maybe because it's your CEO's pet feature. You talk about the feature with your team, and your engineers point out to you that the implementation cost is substantial. Not impossible but not easy either—enough that you don't want to take the time to build this only to find out later it wasn't used.

The demand‐testing technique is called a fake door demand test. The idea is that we put the button or menu item into the user experience exactly where we believe it should be. But, when the user clicks that button, rather than taking the user to the new feature, it instead takes the user to a special page that explains that you are studying the possibility of adding this new feature, and you are seeking customers to talk to about this. The page also provides a way for the user to volunteer (by providing their e‐mail or phone number, for example).

What's critical for this to be effective is that the users not have any visible indication that this is a test until after they click that button. The benefit is that we can quickly collect some very helpful data that will allow us to compare the click‐through rate on this button with our expectations or with other features. And then we can follow up with customers to get a better understanding of what they would expect.
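
To make this concrete, here is a minimal sketch of what a fake door test might look like in a web app. The "Export to PDF" feature, the event names, and the track function are all hypothetical stand-ins for whatever you'd actually use; the point is only that the button looks real, and the click is recorded before the user is routed to the explanation page rather than to the feature.

```typescript
// A fake door test for a hypothetical "Export to PDF" feature.
// The button looks exactly like a real feature entry point;
// only the click handler differs.

// Stand-in for whatever analytics client you already use.
function track(event: string, properties: Record<string, string>): void {
  console.log("analytics:", event, properties);
}

function renderFakeDoorButton(container: HTMLElement, userId: string): void {
  // Record the impression so we can later compute
  // click-through rate = clicks / impressions.
  track("export_pdf_impression", { userId });

  const button = document.createElement("button");
  button.textContent = "Export to PDF"; // indistinguishable from a real feature

  button.addEventListener("click", () => {
    track("export_pdf_click", { userId });
    // Instead of the feature, route to a page that explains the study
    // and invites the user to volunteer their e-mail or phone number.
    window.location.assign("/research/export-pdf");
  });

  container.appendChild(button);
}
```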

The same basic concept applies to entire products. Rather than a button on a page, we set up the landing page for the new offering's product funnel. This is called a landing page demand test. We describe that new offering exactly as we would if we were really launching the service. The difference is that if the user clicks the call to action, rather than signing up for the trial (or whatever the action might be), the user sees a page that explains that you are studying the possibility of adding this new offering, and you'd like to talk with them about that new offering if they're willing.
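
The landing-page variant can be sketched the same way, one level up the funnel. The handler names, routes, and in-memory stores below are illustrative assumptions, not a prescribed implementation: the call to action records the demand signal instead of starting a trial, and a second handler captures volunteers who are willing to talk.

```typescript
// A landing page demand test for a hypothetical new offering.
// In-memory arrays stand in for whatever datastore you'd actually use.

const ctaClicks: { visitorId: string; clickedAt: Date }[] = [];
const volunteers: { email: string }[] = [];
let landingPageViews = 0;

function recordLandingPageView(): void {
  landingPageViews += 1;
}

// Called when a visitor clicks "Start free trial" on the landing page.
// No trial is started; we record the demand signal and return the
// route for the page explaining the study.
function handleTrialCta(visitorId: string): string {
  ctaClicks.push({ visitorId, clickedAt: new Date() });
  return "/research/new-offering";
}

// Called if the visitor volunteers to talk with us about the offering.
function recordVolunteer(email: string): void {
  volunteers.push({ email });
}

// The demand signal: what fraction of visitors tried to sign up?
function ctaConversionRate(): number {
  return landingPageViews === 0 ? 0 : ctaClicks.length / landingPageViews;
}
```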

With both forms of demand testing, we can show the test to every user (in the case of an early startup), or we can show it to just a very small percentage of users or within a specific geography (in the case of a larger company).
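
One common way to limit exposure is deterministic bucketing: hash a stable user ID into a bucket from 0 to 99 so each user consistently either sees the test or doesn't. A sketch, with the hash choice and the shape of the rule as assumptions rather than a particular product's implementation:

```typescript
// Deterministically bucket a user into [0, 100) from a stable ID,
// so each user consistently sees (or doesn't see) the test.
// FNV-1a is used here only because it's a simple, well-known hash;
// any stable hash works.
function bucketOf(userId: string): number {
  let hash = 0x811c9dc5;
  for (let i = 0; i < userId.length; i++) {
    hash ^= userId.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193);
  }
  return (hash >>> 0) % 100;
}

interface ExposureRule {
  percentage: number;   // e.g., 1 means 1 percent of users
  countries?: string[]; // optional geography restriction
}

function shouldSeeTest(userId: string, country: string, rule: ExposureRule): boolean {
  if (rule.countries && !rule.countries.includes(country)) return false;
  return bucketOf(userId) < rule.percentage;
}

// An early startup might expose everyone; a larger company might
// start with 1 percent of users in a single country.
const startupRule: ExposureRule = { percentage: 100 };
const enterpriseRule: ExposureRule = { percentage: 1, countries: ["NZ"] };
```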

Hopefully, you can see that this is very easy to do, and you can quickly collect two very useful things: (1) some good evidence on demand and (2) a list of users who are very ready and willing to talk with you about this specific new capability.

In practice, the demand is usually not the problem. People do sign up for our trial. The problem is that they try out our product and they don't get excited about it—at least not excited enough to switch from what they currently use. And dealing with that is the purpose of the qualitative and quantitative techniques in the chapters that follow.


Discovery Testing in Risk‐Averse Companies

Much has been written about how to do product discovery in startups—by me and by many others. There are many challenges for startups, but the most important is survival.

One of the real advantages to startups from a product point of view is that there's no legacy to drag along, no revenue to preserve, and no reputation to safeguard. This allows us to move very quickly and take significant risks without much downside.

However, once your product develops to the point that it can sustain a viable business (congratulations!), you now have something to lose, and it's not surprising that some of the dynamics of product discovery need to change. My goal here is to highlight these differences and to describe how the techniques are modified in larger, enterprise companies.

Others have also been writing about how to apply these techniques in enterprises, but on the whole, I have not been particularly impressed with the advice I've seen. Too often, the suggestion is to carve out a protected team and provide them some air cover so they can go off and innovate. First of all, what does this say about the people not on these special innovation teams? What does this say about the company's existing products? And, even when something does get some traction, how well do you think the existing product teams will accept this learning? These are some of the reasons I'm not an advocate of so‐called corporate innovation labs.


I have long argued that the techniques of product discovery and rapid test and learn absolutely apply to large enterprise companies, and not just to startups. The best product companies—including Apple, Amazon, Google, Facebook, and Netflix—are great examples where this kind of innovation is institutionalized. In these companies, innovation is not something that just a few people get permission to pursue. It is the responsibility of all product teams.

But before I go any further, I want to emphasize the most important point for technology companies: If you stop innovating, you will die. Maybe not immediately, but if all you do is optimize your existing solutions, and you stop innovating, it is only a matter of time before you are someone else's lunch.


I believe it's non‐negotiable that we simply must continue to move our products forward and deliver increased value to our customers.

That said, we need to do this in a responsible way. This means doing two big things: protect your revenue and brand, and protect your employees and customers.

Protect Revenue and Brand

The company has built a reputation and has earned revenue, and it is the job of the product teams to do product discovery in ways that protect this reputation and this revenue. We've got more techniques than ever to do this, including many techniques for creating very low‐cost and low‐risk prototypes, and for proving things work with minimal investment and limited exposure. We love live‐data prototypes and A/B testing frameworks.

Many things do not pose a risk to brand or revenue, but for the things that do, we utilize techniques to mitigate this risk. Most of the time an A/B test with 1 percent or less of the customers exposed is fine for this.

Sometimes, however, we need to be even more conservative. In such cases, we'll do an invite‐only live‐data test, or we'll work with customers from our customer discovery program who are under NDA. There are any number of other techniques in the same spirit of test and learn in a responsible way.
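
For these more conservative cases, the gate can be an explicit allowlist rather than a percentage. A minimal sketch, complementing the bucketing example shown earlier; the account IDs are purely illustrative:

```typescript
// A gate for an invite-only live-data test: only named customer
// discovery program accounts (under NDA) see the change, regardless
// of any percentage-based rollout.
const discoveryProgramAccounts = new Set<string>([
  "acct_1001", // illustrative IDs; in practice loaded from config
  "acct_1002",
]);

function canSeeLiveDataTest(accountId: string): boolean {
  return discoveryProgramAccounts.has(accountId);
}
```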

Protect Employees and Customers

In addition to protecting revenue and brand, we also need to protect our employees and our customers. If our customer service, professional services, or sales staff are blindsided by constant change, it makes it very hard for them to do their jobs and take good care of customers.

Similarly, customers who feel your product is a moving target they have to constantly relearn won't be happy customers for long.

This is why we use gentle deployment techniques, including assessing customer impact. Although this may seem counterintuitive, continuous deployment is a very powerful gentle deployment technique, and when used properly along with customer impact assessment, it is a powerful tool for protecting our customers.

Again, most experiments and changes are non‐issues, but it is our responsibility to be proactive with customers and employees and sensitive to change.

Don't get me wrong. I am not arguing that innovating in enterprise companies is easy—it's not. But it's not because product discovery techniques are the obstacles to innovation. They are absolutely critical to consistently delivering increased value to customers. There are broader issues in large enterprise companies that typically create obstacles to innovation.

If you are at a larger, enterprise company, know that you absolutely must move aggressively to continuously improve your product, well beyond small optimizations. But you also must do this product work in ways that protect brand and revenue, and protect your employees and your customers.