What makes a "good" need statement for JTBD?

The pursuit of perfection is the pursuit of waste

Let me start off by saying…

“There is no perfect customer need statement, no matter how many times someone says there is.”

Okay, let’s move on.

Let’s instead talk about what a good customer need statement is. Good, as in good enough. What characteristics should it have? Perhaps one that all involved parties clearly understand and interpret the same way. Here are a few more thoughts:

  • It should not fatigue a survey respondent

  • It should be interpreted by everyone the same way (common language makes that much more likely)

  • It should not make your check writer have buyer’s remorse

  • It should blend well into actionable themes

  • It should be simple to incorporate into storytelling without heavy modification or workshops (which could change the meaning and thus wander away from the data)

  • It should be closer to an actual desired outcome of an end user as opposed to the efficiency within a task needed to achieve the outcome

  • It should not require practitioners to memorize the components, or follow any arbitrary (i.e., made up) rules

These statements - regardless of format - are so much more valuable than a pain point uncovered in a workshop. But, within the world of developing measurable value models for Jobs to be Done, there is still room for improvement. In fact, the pursuit of perfection has actually wandered off the path of improvement into the realm of confoundery (IMHO).

Can I get you to share this post with your colleagues?


I don’t know anyone who can listen for desired outcome statements in real time and come up with more than 5 in an hour-long interview (unless you’re the ultimate authority on what a good statement is). One person simply cannot do everything that’s required. There are too many rules, and when more than 5 of these statements surface, the quality review sessions end up going off the rails. Again, in my humble opinion as someone who has witnessed it firsthand.

I just took notes and didn’t bother validating the statement structure with the interviewee (prompting them to agree with my phrasing would only introduce bias). They don’t talk like that, or they would have just handed me a list and collected their payment.

It’s just easier to simplify things - unless you use AI, which I highly recommend. It doesn’t matter what version you prefer, because I can generate customer success statements in any form you like, and I can generate thousands per day. Maybe more! And they come out perfect! No need to run them through some statement validator LOL.

Yes, and they’re very, very good. Wait, I did say they were perfect…

Sorry, I’m getting a little full of myself here 🤣
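For the curious, here’s roughly what that looks like mechanically. This is a minimal sketch using the OpenAI Python SDK, not my actual pipeline; the model name, prompt wording, and requested format are assumptions for illustration, and you’ll still want a human eye on the output.

```python
# Minimal sketch only: drafting candidate need statements with an LLM.
# The model name, prompt wording, and requested format are assumptions
# for illustration, not a prescribed pipeline.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

PROMPT = """You are helping with a Jobs-to-be-Done study.
Job: {job}
Job step: {step}
Write {n} desired outcome statements in the form:
"Minimize the time it takes to <action>, e.g., <examples>, etc."
Return one statement per line, nothing else."""

def generate_statements(job: str, step: str, n: int = 5) -> list[str]:
    """Ask the model for n candidate statements for one job step."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable chat model will do
        messages=[{"role": "user",
                   "content": PROMPT.format(job=job, step=step, n=n)}],
    )
    text = response.choices[0].message.content
    return [line.strip() for line in text.splitlines() if line.strip()]

for s in generate_statements(
    job="Converting an anonymous prospect into a customer",
    step="Identify Target Audience",
):
    print(s)
```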


As we investigate the horizontal world of experiences, how important is it to be so precise that we must risk a revolt with a contrived metric structure? I would argue that this is extremely unnecessary. No one’s going to spend a quarter million dollars evaluating a customer journey when they can put extreme price pressure on any consulting firm to do it for $50k. They know they’re gonna get garbage, but there’s budget that has to be spent!

The interesting thing is that this (journeys and experiences) is where all the money is being spent, not on product innovation. So, there’s a real opportunity for y’all to go out there and sell a ton of inexpensive strategy engagements around experience, undercutting all those prestigious firms while still providing 100x the precision. Just a thought. There are many more places where this approach can be applied…

…but I digress (which is how you know this isn’t AI).


Let’s talk about precision for a moment. We are developing dozens (many times scores) of metrics (success statements) for each job, which brings a high degree of precision all by itself. In addition, we capture abundant dimensional data that allows us to slice and dice all day long…and into the night. So, how perfect does a single metric need to be?

  • Minimize the time it takes to ensure that team roles are aligned with individual career goals, e.g., offering growth opportunities, etc.

This perfect need statement is focused on how much time it takes to ensure something. What it doesn’t do is help us understand the accuracy of the method or solution. Some would say that this is implied since if you can reduce the time it takes to ensure something, it’s quite clear that you’ve done so accurately. Therefore you don’t need a likelihood version.

That doesn’t seem perfect to me. How about you?

It is argued that stripping off the direction of improvement (minimize, which it always is) and the metric (time or likelihood) will make everything important, thus introducing a bias into the ratings (don’t get me started on this topic). So, let me get this straight: if I don’t think it’s important to minimize the time it takes to do something, there’s a strong possibility that I will rate “How important is it to you to quickly and accurately…” as very important?

  • Ensure that team roles are aligned with individual career goals, e.g., offering growth opportunities, etc.

I’d like to see the scientific data proving that this is the case. You can’t just say it’s the case, or that your dog ate the data.

The simple approach ensures (no pun intended) that we only need a single metric because we’ve constructed the lead-in to have the respondent think about multiple dimensions. Once we’ve done our segmentation we can follow up with a handful of respondents (which we need to do anyway) to understand the time and/or accuracy distinctions.

If you don’t follow up after a survey, you’re making assumptions about a segment or priority group and will never pull the emotions and stories out.

I’ve talked about the need to follow up with real interviews of a handful of people that you KNOW are in a segment, or KNOW are in a high priority group. This gives them WEIGHT because you understand their relevance before you speak to them.
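If you want to see what “slice and dice, then follow up” can look like in practice, here’s a rough sketch: cluster respondents on their ratings to form candidate segments, profile each segment, and pull a handful of people per segment for follow-up interviews. The file name, column names, cluster count, and sample size are all placeholder assumptions, not a prescription.

```python
# Rough sketch: segment survey respondents on their ratings, then pick a
# handful per segment for follow-up interviews. The file name, column
# names, cluster count, and sample size are illustrative assumptions.
import pandas as pd
from sklearn.cluster import KMeans

# One row per respondent; stmt_* columns hold 1-10 importance ratings
# for each customer success statement (hypothetical layout).
responses = pd.read_csv("survey_responses.csv")
rating_cols = [c for c in responses.columns if c.startswith("stmt_")]

# Cluster respondents into candidate segments based on how they rated.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0)
responses["segment"] = kmeans.fit_predict(responses[rating_cols])

# Slice and dice: mean rating per statement within each segment.
print(responses.groupby("segment")[rating_cols].mean().round(1))

# Grab up to 5 respondents per segment that you now KNOW belong there,
# for follow-up interviews about the time/accuracy distinctions and stories.
follow_up = responses.groupby("segment").head(5)
print(follow_up[["respondent_id", "segment"]])
```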

Here are some examples to fill your mind

Note: I took the first 5 from two separately generated versions, so the two lists won’t match one-to-one, although there is overlap across the catalogs. These are not prioritized, so they come out in random order.

Job: Converting an anonymous prospect into a customer
Job Executor: Chief Marketing Officer (for the sake of simplicity)

Desired Outcome Statements

The lead-ins for these in a survey would usually be:

  • How important is it for you to…

  • How satisfied are you [optional: given your current solution] with your ability to…
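For context, those two lead-ins (importance and satisfaction) are what feed the widely published ODI-style opportunity score: opportunity = importance + max(importance - satisfaction, 0). Here’s that math as a tiny sketch, with made-up statement texts and numbers, just so the connection is concrete.

```python
# The commonly published opportunity-score math that the importance and
# satisfaction lead-ins feed. The statement texts and numbers are made up.
def opportunity(importance: float, satisfaction: float) -> float:
    """opportunity = importance + max(importance - satisfaction, 0)"""
    return importance + max(importance - satisfaction, 0.0)

# Hypothetical 0-10 scores per statement: (importance, satisfaction)
scores = {
    "Minimize the time it takes to analyze demographic data...": (8.2, 4.1),
    "Minimize the time it takes to segment the audience...": (6.5, 7.0),
}

for statement, (imp, sat) in scores.items():
    print(f"{opportunity(imp, sat):5.1f}  {statement}")
# ~12.3 for the underserved first statement, 6.5 for the well-served second.
```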

Step: Identify Target Audience

I’ve gotta say, even though I’m arguing against this version, these are pretty damn good! I see a few rule breakers (which I’ll fix) but still…

  1. Minimize the time it takes to analyze demographic data of potential customers, e.g., age groups, income levels, etc.

  2. Minimize the time it takes to identify key interests and preferences of the target market, e.g., hobbies, purchasing habits, etc.

  3. Minimize the time it takes to segment the audience based on behavioral patterns, e.g., online activity, brand interactions, etc.

  4. Minimize the time it takes to research and understand the cultural nuances of the target audience, e.g., language preferences, cultural sensitivities, etc.

  5. Minimize the time it takes to evaluate the effectiveness of past marketing strategies on similar audiences, e.g., campaign response rates, engagement levels, etc.

Customer Success Statements (the KNOW version)

Breaking away from the constraints of ODI, we could reimagine our lead-ins to be something like this:

  • How important is it to quickly and accurately…

  • How frequently do you need to quickly and accurately…

  • How frustrated do you get when you need to quickly and accurately…

Or you could use importance + difficulty (or effort) if you’d like to use two scales. You could also use satisfaction, but in my mind that’s a lagging indicator that was designed for the ’90s.
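If you do go the two-scale route, here is one purely illustrative way to turn importance and difficulty ratings into a rough priority order. It’s an assumption for the sake of example, not a prescribed formula; the statement texts and numbers are placeholders.

```python
# Purely illustrative: one way to combine importance + difficulty ratings
# (two scales) into a rough priority order. Not a prescribed formula; the
# statements and numbers are placeholders.
import pandas as pd

ratings = pd.DataFrame({
    "statement": [
        "Know the demographic characteristics of your target audience...",
        "Assess the online behavior patterns of your target audience...",
    ],
    "importance": [8.4, 7.1],   # mean importance, 1-10
    "difficulty": [6.9, 3.2],   # mean difficulty/effort, 1-10
})

# Statements that are both important and difficult float to the top.
ratings["priority"] = ratings["importance"] * ratings["difficulty"]
print(ratings.sort_values("priority", ascending=False))
```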

Step: Identify Target Audience

  1. Know the demographic characteristics of your target audience, e.g., age, gender, income level, etc.

  2. Determine the geographic location of potential customers, e.g., urban, suburban, rural areas, etc.

  3. Identify the psychographic traits of your ideal customers, e.g., lifestyle, interests, values, etc.

  4. Assess the online behavior patterns of your target audience, e.g., preferred social media platforms, browsing habits, etc.

  5. Recognize the purchasing power and buying habits of potential customers, e.g., frequent purchases, high-value transactions, etc.

All I can say to sum this up is that there is no perfect way. You will be told you must do it a certain way, but I’m telling you: you can do it any way you like. If you ever need help, there are a few ways I offer it:

  1. I do offer end-to-end consulting if you’re just not ready to do it all on your own. I’m 25x faster and at least 10x cheaper than your alternatives. Big Brands: This means you can get many more problems solved with your existing budget (I work with a global team of experienced practitioners). Small Brands: This Bud’s for you, too.

  2. I also offer coaching: if you’d like to know someone’s got your back while you do the heavy lifting and get some knowledge transfer, I’m there!

  3. I can help you get your qualitative research done in 2 days for mere budget scraps.

  4. I’ve also got an academy where you can find a number of options for a do-it-yourself experience.
