You've been using verbatims all wrong in Jobs-to-be-Done. Yes, you!
Jobs-to-be-Done research requires great story-telling, or it will simply collect dust on the shelf
In traditional Jobs-to-be-Done research, the only time we actually talk to a customer is during qualitative interviews or focus groups. In such a study, if you’re going to collect verbatims - direct quotes from customers - this is the only time you can do it. And this is exactly why they are basically worthless. Disagree? Read on…
As part of the qualitative process, experienced Jobs-to-be-Done practitioners build a value model that scopes the problem being studied. It has definitive boundaries, and each component of the model (a step) has a handful of performance metrics (needs) that end users evaluate within the context of that step.
But, the model is not being rated while we develop it. 👈
When we put the model into a survey, we can capture prioritization ratings that group the respondents into needs-based segments, with each segment of end users struggling differently from the others while getting the same job done. A customer struggle is defined by more than a single metric, which is important to remember because improving a single metric is unlikely to shift a market to a new solution.
Something else we do is identify as many potential situations as possible that an end user may find themselves in, so we can rate those as well. After we cluster the performance metrics statistically, we attempt to correlate these situations to each cluster (or segment). The goal is to explain which situations are causing the segment to struggle in its unique way.
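As a loose illustration only (the post names no specific tooling), the cluster-then-correlate step could be sketched in Python with scikit-learn. Everything here is a stand-in: the data is random, and `metric_ratings` / `situations` are hypothetical names for survey need ratings and situation flags.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical survey data: 300 respondents rating 10 performance
# metrics (1-5) and flagging whether 4 situations apply to them.
metric_ratings = rng.integers(1, 6, size=(300, 10)).astype(float)
situations = rng.integers(0, 2, size=(300, 4))

# Cluster respondents into needs-based segments on their ratings.
segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(metric_ratings)

# For each segment, see which situations are over-represented
# relative to the whole sample -- a rough way to ask which
# circumstances might explain that segment's unique struggle.
for seg in range(3):
    in_seg = situations[segments == seg].mean(axis=0)
    overall = situations.mean(axis=0)
    lift = in_seg / overall
    print(f"segment {seg}: situation lift = {np.round(lift, 2)}")
```

With real data you would choose the number of clusters empirically and use a proper association test rather than a raw lift ratio; this only shows the shape of the analysis.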
We always want to evaluate a complete market, not a single situation, which is where we depart significantly from those who have followed Clay Christensen. Christensen believed that this analysis should begin with a circumstance, but there is no guarantee that a circumstance (or situation) addresses the entire market, or whether it's even under- or over-served. Additionally, an end user might find themselves in different segments (the way we define them) depending on the circumstance they are in when getting the job done. While this may work for marketing, it will not work for innovation.
Consider the job of listening to music, where you may be listening in your home, your vehicle, or while out on a jog. You can still study these situations individually, but be aware that they are not a complete view of the market; they are subjectively selected, not driven by data.
Since the end users we interview (on the front end) to develop the model do not rate the model themselves, there is no way to accurately assign any verbatim they provide to a segment, and therefore those verbatims should not be contributors to the story you ultimately develop. Here's why:
- If the verbatim relates to a single metric, it is not describing a segment. Segments will generally have numerous low-performing metrics and be associated with one or more situations.
- There is no way to assign the person who provided the verbatim to a segment. The verbatim may therefore describe a different struggle in a different segment; we just can't know.
- A single end user is not a market, and a statement with no other data to provide context does not represent a struggle in the market.
- We have no overarching collection of verbatims from segment members that address low-performing metrics (because the metrics have not yet been rated).
Basically, we're just praying that no one asks for verbatims. More importantly, our innovation story is going to be very weak if all we have are functional metrics presented as a foreign language to senior decision-makers. At the end of the day, you have to tell a compelling story, and doing so requires that you know your audience and tell the story in the language they want and need to hear. It doesn't work the other way around. Not everyone is an engineer.
If your audience doesn't know your language, or if you overload their working memory with data they are not programmed to consume, your story will be forgotten very quickly. There is plenty of research to back this up (this is a great book on story-telling). Putting methodology training on the front end of the story to force the audience into your mindset just doesn't work.
Let's be honest: there is no 100% certainty that we'll get all of the situations right, or even capture all of them. But collecting what we can uncover up front for inclusion in the model is far more accurate than other methods. Can it be better, though?
I was talking to one of my former colleagues a few weeks ago and this subject came up. How do you ensure that compelling research like this doesn't end up on the shelf? It often does, which means the opportunities uncovered never get implemented. Here's my quick take on it…
If you are ultimately going to take a new product to market, you need to be able to tag prospects to their respective innovation segments quickly, as prospects are identified. This is not the same sort of segmentation we commonly see from our marketing friends, but they are the ones who need to reach the audience. Not everyone in an innovation segment is 35-54 years old or lives in the 44022 zip code.
Since we need to figure this out down the road for marketing purposes anyway, we actually need to figure it out now. How to do so is another topic I'll try to address later.
If we figure it out now, we can recruit end users who we identify as being in a segment and get them to talk about why they struggle with the metrics that we already know they struggle with (because we have already run a survey and segmented the market).
These verbatims can tie back to, and elaborate on, the job story (see my post on this), and they collectively provide the emotion that your senior decision-makers want to hear before they nod their heads in agreement and approve your budget.
To summarize, I'm suggesting that an innovation-focused study is less likely to succeed without verbatims that are captured after the survey and segmentation have taken place. It's not that the data is bad, but success requires action, and action is best supported by a compelling story.
Good story-telling requires more than just numbers. It requires more than loose correlations. It requires a beginning, a middle, and an end. It also requires emotion, which can only come from verbatims. And perhaps most importantly, it requires facts, because someone is going to ask you to trace that story back to a fact; while they may recognize data, it's less believable without a verbatim.
A qualitative verbatim is at best, a loose correlation. Don’t fool yourself into believing otherwise.