Note: I’m not trying to stir anything up. What I write about comes from both thinking and doing. The scientific method involves challenging your hypotheses, so I do. I am happy to engage in critical debate, and I don’t hold my opinions sacred. If you have a question, just ask; if you have a suggestion, likewise.
Outcome-Driven Innovation (ODI) has a unique and rigorous structure in the way it defines customer performance metrics for Jobs-to-be-Done. In conjunction with a well-constructed survey that captures a complete set of the situations and complexities related to the job, as well as proper segmentation experiments, it gives innovation teams very accurate problem-space targeting.
There is really no other method I’m aware of that abstracts itself from the solution-space as completely as ODI does. Even other versions of Jobs-to-be-Done study journeys, behaviors, and aspirations, which places them far closer to traditional marketing research. You simply cannot dispute its uniqueness, but many people have a vested interest in not understanding how or why it works.
The first thing traditional marketing research people see is the complexity. The next thing they notice is that we don’t interview people we want as customers; we interview people who have tried to get a job done. Our thinking diverges because we don’t study the solution-space (people who use or might use our product). Our focus is squarely on the problem, which is usually larger than any single solution. That breadth is what exposes potential value-creating opportunities to innovators.
I’ve repeatedly heard feedback along these lines:
The method is far too rigorous and requires a degree in statistics.
The survey is too long.
How do we know if we captured every possible complexity, situation, or context (MECE)?
The rules are confusing and self-defeating.
Outcome statements are hard to read, so how can we rely on the interpretation of thousands of respondents?
Executives don’t trust them because they don’t conform to their mental model.
It’s hard to reconcile the research with prior work by the marketing team.
While I could argue with these objections all day long, I’ve had to ask myself if that’s the right path forward. We’ve got a powerful, yet complex, Jobs-to-be-Done model. The question I’ve been asking myself is whether we can simplify it, make it more consumable and more portable, and still retain a meaningful advantage over models that favor aspiration over function, or the solution-space over the problem-space. I mean the real problem-space, not problems related to consuming the services incorporated into your solution.
Working from the bottom up, I’ve begun making some changes that I’m sure many of you have noticed. Perhaps the most obvious is that I’m no longer using a direction on the front end of a performance statement. There are so many practitioners out there using minimize, reduce, increase, maximize, etc. that it creates confusion. And limiting yourself to one word like minimize essentially means you don’t need a direction word at all; I’ll try to explain in a moment.
Simplifying the statements makes them easier to understand and reduces variability in how respondents interpret them. I’m beginning to believe that we can ask respondents to rate these statements while considering dimensions like time and likelihood. I’ve chosen to frame these as “faster” and “more accurately,” and the framing applies equally to the step and to the performance metric itself.
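To make that framing concrete, here is a minimal sketch; the statement text, field names, and structure are hypothetical illustrations of the idea, not part of ODI as published:

```python
# Minimal sketch, with hypothetical statements and field names:
# instead of several direction+metric variants of one outcome,
# a single simplified statement is rated on two dimensions.
from dataclasses import dataclass

@dataclass
class SurveyItem:
    statement: str      # simplified: no direction word, no embedded metric
    dimensions: tuple   # rating lenses applied to the statement

# Traditional ODI style: direction + metric baked into each statement.
traditional = [
    "Minimize the time it takes to verify that the entries are correct",
    "Minimize the likelihood that an incorrect entry goes unnoticed",
]

# Simplified style: one statement, two rating dimensions.
simplified = SurveyItem(
    statement="know you haven't made a mistake",
    dimensions=("faster", "more accurately"),
)
```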
I’ve written about how this all fits together, as well as the importance of post-survey interviews to capture stories (and verbatims) from specific needs-based segments. This approach fills gaps created up front by overlooked circumstances and adds appropriate color and richness to what would otherwise be mere data points.
I’ve also limited the number of verbs I use for performance metrics (not for steps). ODI points to a set of common verbs but really relies on a handful of them. I’m essentially doing the same. The difference is that I’m moving away from task-based action verbs and toward the ultimate outcome (at least I’m trying to).
If you apply something like the 5-Whys to a traditional ODI outcome statement, you will ultimately get to a response like “Because I want to know that I haven’t made a mistake.” To me, this is the ultimate outcome, and you will see me use this verb, know, a lot.
“How important is it for you to know you haven’t made a mistake?”
“How difficult is it for you to know you haven’t made a mistake?”
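For readers who want to see how such ratings might feed prioritization, ODI’s published opportunity algorithm scores each statement as importance + max(importance − satisfaction, 0). Below is a minimal sketch that applies it to importance/difficulty ratings; treating difficulty as the inverse of satisfaction is my assumption, not part of the method as published, and the numbers are made up:

```python
# Minimal sketch of an opportunity-style score from mean ratings.
# Ulwick's published ODI formula uses importance and satisfaction:
#   opportunity = importance + max(importance - satisfaction, 0)
# Treating difficulty as the inverse of satisfaction on a 0-10
# scale is my assumption, not part of the published method.

def opportunity(importance: float, difficulty: float, scale_max: float = 10.0) -> float:
    satisfaction = scale_max - difficulty   # assumed inversion
    return importance + max(importance - satisfaction, 0.0)

# Hypothetical mean ratings for "know you haven't made a mistake":
print(opportunity(importance=8.2, difficulty=6.9))   # prints roughly 13.3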
If this metric turns out to be one of the drivers for a segment, the logical way to ensure a complete understanding of that segment is to return to its respondents and ask them to elaborate with real stories. Having that conversation around simple outcome-based statements is key; not having it is potentially a flaw in the approach.
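ODI practice typically finds needs-based segments with factor and cluster analysis; the sketch below substitutes plain k-means for simplicity. The ratings matrix and every number in it are fabricated purely for illustration, and the “top statements” per segment are the candidates to revisit in those post-survey conversations:

```python
# Minimal sketch of needs-based segmentation, assuming a matrix of
# respondent ratings (rows = respondents, columns = outcome
# statements). ODI practice typically layers cluster analysis on a
# factor analysis; plain k-means here is a simplification.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
ratings = rng.integers(1, 11, size=(200, 12))   # fabricated 1-10 ratings

segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit(ratings)

# For each segment, surface the statements its members rate highest;
# these drive the follow-up interviews described above.
for seg in range(segments.n_clusters):
    members = ratings[segments.labels_ == seg]
    top = np.argsort(members.mean(axis=0))[::-1][:3]
    print(f"segment {seg}: n={len(members)}, top statements {top.tolist()}")
```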
What do we get out of this?
I listed a number of objections above that I believe get resolved with a simpler approach:
The survey is too long: removing the metric portion of the statement reduces the number of statements necessary for the model (for example, separate “minimize the time...” and “minimize the likelihood...” variants collapse into a single statement rated on two dimensions).
Capture all situations: we don’t rely on a perfect model, since we will capture more specific complexities, situations, and circumstances during post-survey interviews. This simply has to be done; there is no perfect model, and these data points cannot be captured 100% through front-end interviews because we don’t know which segment the interviewee will land in.
Confusing rules: my rules are simple. Use the 5-Whys to get to a simple statement that starts with an outcome-based verb rather than a task-based verb.
Hard to read: these statements are much easier to read and analyze. We’re not asking people to learn a new language, but we’re still putting together a very structured model, one that takes integration into consideration.
Executives will reject them: these people don’t have a lot of time, and they typically have a lot of experience that conflicts with this approach and its language. Citing your success rate doesn’t work. Just use language they understand and move on.
Won’t reconcile with prior research: JTBD will never reconcile 100%, and the last thing you want to do is push language on professionals who have a fundamentally different view of VoC. It’s easier to find common ground if you create the appearance of moving closer to them. You can’t change them, so you have to change yourself.
I hope all of this makes sense. If not, please feel free to share your thoughts. I’m sure there are a lot of experienced researchers out there who can provide valuable input on this methodology and the small changes I’m making to it.
Makes a lot of sense and addresses a real pain point folks have working with outcomes.
Nice article, Mike. I also recommend using the 5-Whys to get to a simpler, higher-level JTBD. Another thing I've found useful in combating some of the resistance you talk about is visualising how far serving a JTBD helps to achieve a specific business objective. I'll link to your post from our knowledge base, if that's ok with you. Do let me know if you'd rather I didn't.