For as long as we can remember, innovation has been a top priority — and a top frustration — for leaders. In a recent McKinsey poll, 84% of global executives reported that innovation was extremely important to their growth strategies, but a staggering 94% were dissatisfied with their organizations’ innovation performance. Most people would agree that the vast majority of innovations fall far short of ambitions.
On paper, this makes no sense. Never have businesses known more about their customers. Thanks to the big data revolution, companies now can collect an enormous variety and volume of customer information, at unprecedented speed, and perform sophisticated analyses of it. Many firms have established structured, disciplined innovation processes and brought in highly skilled talent to run them. Most firms carefully calculate and mitigate innovations’ risks. From the outside, it looks as if companies have mastered a precise, scientific process. But for most of them, innovation is still painfully hit-or-miss.
What has gone so wrong?
The fundamental problem is that most of the masses of customer data companies create are structured to show correlations: This customer looks like that one, or 68% of customers say they prefer version A to version B. While it’s exciting to find patterns in the numbers, they don’t mean that one thing actually caused another. And though it’s no surprise that correlation isn’t causality, we suspect that most managers have grown comfortable basing decisions on correlations.
Why is this misguided? Consider the case of one of this article’s coauthors, Clayton Christensen. He’s 64 years old. He’s six feet eight inches tall. His shoe size is 16. He and his wife have sent all their children off to college. He drives a Honda minivan to work. He has a lot of characteristics, but none of them has caused him to go out and buy the New York Times. His reasons for buying the paper are much more specific. He might buy it because he needs something to read on a plane or because he’s a basketball fan and it’s March Madness time. Marketers who collect demographic or psychographic information about him — and look for correlations with other buyer segments — are not going to capture those reasons.
After decades of watching great companies fail, we’ve come to the conclusion that the focus on correlation — and on knowing more and more about customers — is taking firms in the wrong direction. What they really need to home in on is the progress that the customer is trying to make in a given circumstance — what the customer hopes to accomplish. This is what we’ve come to call the job to be done.
We all have many jobs to be done in our lives. Some are little (pass the time while waiting in line); some are big (find a more fulfilling career). Some surface unpredictably (dress for an out-of-town business meeting after the airline lost my suitcase); some regularly (pack a healthful lunch for my daughter to take to school). When we buy a product, we essentially “hire” it to help us do a job. If it does the job well, the next time we’re confronted with the same job, we tend to hire that product again. And if it does a crummy job, we “fire” it and look for an alternative. (We’re using the word “product” here as shorthand for any solution that companies can sell; of course, the full set of “candidates” we consider hiring can often go well beyond just offerings from companies.)
This insight emerged over the past two decades in a course taught by Clay at Harvard Business School. (See “Marketing Malpractice,” HBR, December 2005.) The theory of jobs to be done was developed in part as a complement to the theory of disruptive innovation — which at its core is about competitive responses to innovation: It explains and predicts the behavior of companies in danger of being disrupted and helps them understand which new entrants pose the greatest threats.