Makes a lot of sense and addresses a real pain point folks have working with outcomes.
Hi Jim. Thanks for stopping by
Nice article, Mike. I also recommend using 5-whys to get to a simpler, higher-level JTBD. Another thing I've found useful in combating some of the resistance you talk about is visualising how far serving a JTBD helps to achieve a specific business objective. I'll link to your post from our knowledge base, if that's ok with you. Do let me know if you'd rather I didn't.
Since I am using JTBDs in consulting engagements, and not in product / service innovation per se, your simplifications make a lot of sense. Using minimize / increase up front does not help at all, and moving to something like "know" makes things much easier without compromising precision.
The same applies to focusing on the step rather than on the metric. And since surveys rarely apply in consulting, it is convenient to capture outcomes / ratings and verbatims in 1:1 interviews, even if they are adjusted as the projects advance. When this is done right to begin with, adjustments are welcome, since they reflect a better joint understanding of what we are trying to change.
I do, however, use the standard metrics after this is done, since I must reach a consensus with my clients on how we will jointly evaluate whether the outcomes have been met. There has to be a metric and, in many cases, a specific goal at that level. This does not seem to change what you are proposing; it just adapts it to a specific situation.
Hi Mike, thanks for the article. I'm currently conducting JTBD interviews and have a bit of a dilemma about how to frame jobs that are done automatically. For example, a marketing person in an ecommerce business "sends customer review invitations", but what actually happens is that the software does it automatically; the person only has to configure the campaign initially. So in this case, would the job be "create a customer review invitation automation campaign", or can "sending customer review invitations" be considered a job in itself? As I understand it, JTBD frames a job as a sort of process that should be executed from beginning to end, every time, by a person.
I recommend not focusing on what the system does. Ask yourself why it is sending customer review invitations. Just because something has been automated doesn't mean it's solving the right problem, or solving it completely. Until you understand what this function is trying to accomplish, you won't have the input you need for building a solution-agnostic map, or the metrics used for performance measurement.
Thanks for the reply! I see, so I would define a job as a process that is done agnostically (by a person or a solution), and then break it down into concrete steps to see whether together they achieve the outcomes the job performer is seeking, or whether there are gaps/under-performance in any of the steps related to the job. This part was confusing for me, given that the job performer doesn't perceive an automated job as part of what they do, since they only set it up once, which makes it harder to understand their needs in this regard.
If you are studying a "job" that is already highly automated, there are two things to keep in mind: 1) the job is still getting done, and it might be worth learning whether the automation is addressing the outcomes (performance metrics of the job) adequately, and 2) you may want to consider whether what you are looking at is really only part of a larger job. There's a 3) below!
If this is an innovation exercise, you may want to consider the current job as a step in a larger job and ask yourself if you could get more of that larger job done on a single platform, and perhaps differently. Getting the larger job done differently (not cobbling current point solutions together) is what leads to disruption. And people will pay more for it.
We don't think of Pandora as costing more, but the $$ are actually going to a different place in the value chain (which has changed). Subscriptions add up, and advertisements are paid for as well. Anyway, I digress.
Once upon a time people had to provide for themselves. If you wanted to stay warm in the winter, you needed to make sure to chop plenty of wood before winter came. Then you needed to tend to the fire, etc. You probably cooked this way as well. Today, this is automated. However, HVAC companies have developed products which need to be installed, maintained and replaced. Those are jobs you can study that are related to the automated job, if that makes sense. It's a horizontal view that might lead to ways to leverage the technology across many industries and/or brands in the future. That's another slice you could take at it.
Hey, thanks for some generous food for thought! Indeed, I'm looking somewhere between 1) and 2), trying to figure out whether several of those automated jobs combine into something larger that job performers are seeking at the end of the day. From the 2 interviews I've had so far, it seems people are just following "best practices" and sometimes aren't really aware of the end outcome/need. That reminds me that I should maybe try to go up the ladder to figure out what the job performer, at least in their own mind, thinks the bigger job/outcome is, without hitting the aspiration levels.
As for the horizontal slice of automated jobs, I had never thought about it that way, but now that I think about it, I guess there could be an angle there. In my context, that could be marketing agencies that are setting up/improving the automated jobs I'm investigating for ecommerce companies, but that would be a different job performer segment, and different jobs overall.
Keep in mind, the various steps in a job - depending on how you are defining it - could be performed by different solutions, some automated and some people-powered with tools, etc.
As for the horizontal, yes, a service provider would be a different "job performer", whereas the original one may now simply be the beneficiary.
Great, thanks again for the tips. I'm having my next interview today with some more refinements; we'll see if I can get more out of it.
I work with micro businesses who have small teams and are spread very thin. They're always ping-ponging between “chasing new clients in the door” and delivering their service. JTBD can make their sales process more efficient, but they need a very simple way to implement it. I've been playing with how to boil it down to easy steps that they can create habits around. Jim Kalbach's book has been a great resource for this, in my opinion. Any support or resources for this are welcome!
I'd love to hear what you come up with. The best thing I can offer is that customers are trying to make decisions. From a JTBD perspective, the best thing it can do is help you understand what resources you can offer, at the right time, to help them make a decision that is favorable to the business, as opposed to walking away or getting stuck.
We added one step in the ODI process to overcome this challenge.
The step is after the job map creation but before the quantitative survey.
We call it "Limit the Job Map to the outcomes we can realistically serve in the near-to-medium-term future".
This is because, for a small company like us, some steps and outcomes are simply not viable to address - no matter how important and unsatisfied they are for our market.
This has allowed us to remove 37 out of the 144 outcomes in our market, a 26% reduction in the number of questions we need to ask.
In addition, it has allowed us to remove some steps in the job completely. We remove a step entirely if we cannot satisfy at least 75% of the outcomes in that step. Our reasoning is that if we cannot perform at least 75% of a step's outcomes in a satisfactory way, it's better to abandon that step completely, because customers simply will not be happy with our services in that step - better, then, to collaborate with partners who are experts in those steps.
This has allowed us to remove even more outcomes - a further 12 beyond the ones above.
So in total, we can ignore 49 of our 144 outcomes - a 34% reduction - which will vastly simplify our quantitative research process.
Perhaps this is something SMEs can incorporate in general in the ODI process.
We have not yet performed our quantitative survey, so I'll need to report back on how this affected the study once we do, and on any pitfalls in doing it this way that I do not see yet. One such pitfall could be that we will miss detecting partner-collaboration opportunities: important outcomes we cannot address ourselves, but which we could address by forming partnerships. However, I see that as worth it if the alternative is that the study simply becomes too expensive for us to do at all.
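Sketched as code, our two-stage filter looks roughly like this. Everything in the example data is made up for illustration (the step names, outcomes, and "servable" flags are not from our actual job map); only the 75%-per-step rule comes from the approach described above:

```python
# Two-stage outcome filter: drop outcomes we cannot serve, then drop whole
# steps where fewer than 75% of the outcomes are servable.

def filter_outcomes(job_map, min_step_coverage=0.75):
    """job_map: step name -> list of (outcome, servable) pairs.
    Returns (reduced map, count of outcomes dropped as unservable,
    count of servable outcomes dropped along with abandoned steps)."""
    reduced, dropped_unservable, dropped_with_steps = {}, 0, 0
    for step, outcomes in job_map.items():
        servable = [name for name, ok in outcomes if ok]
        dropped_unservable += len(outcomes) - len(servable)
        if len(servable) / len(outcomes) < min_step_coverage:
            # Abandon the whole step; partners can cover it instead.
            dropped_with_steps += len(servable)
        else:
            reduced[step] = servable
    return reduced, dropped_unservable, dropped_with_steps

# Made-up example data (not our actual job map):
job_map = {
    "define objectives": [("know the target margin", True),
                          ("know the launch date", True),
                          ("know the budget ceiling", True),
                          ("know the exit criteria", True)],
    "locate inputs":     [("know where the data lives", True),
                          ("know who owns the data", False),
                          ("know how fresh the data is", False)],
}
reduced, unservable, with_steps = filter_outcomes(job_map)
```

With this toy data, "locate inputs" is abandoned (only 1 of its 3 outcomes is servable), so the survey would carry 4 of the original 7 outcomes.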
Mike,
Big fan of your articles. They've been a great help as I've been learning how to practically use JTBD. I have a few questions:
1) When do you choose other words besides "know" in your outcome statements?
2) How do you ensure that the object of control is measurable? (Or do you not?)
3) Do you find value in providing context or clarifiers for outcomes to help survey respondents? (Like underlining importance, credibility, and points as components of meaning.)
1) I don't "know" yet. I do occasionally use other, more task-oriented verbs. It's easier to slot those into "universal" models that are very abstracted. I'm going through models and rethinking them to see if I can come up with rules. If you look at my "analyzing the market of marketing" piece, you'll see how the steps are still task-oriented and the outcomes support them, based on the test-fit structure I shared. I'm experimenting!
2) I'm rating the statement on its importance, and on the difficulty of achieving it. It's easier to speak plain English than to explain it to someone in marketing :)
3) Examples in outcomes are often an artifact of having a higher-level outcome. If your examples are finite, and you need the detail in your model, you should consider separate outcome statements. In a perfect world, the statement should be clear without an example.
Hope that helps. I'm still percolating on #1
Regarding point #2: modifying statements for presentation, implementing the ODI form in a survey, and then rewording them again has been done before. However, I feel that interpreting two versions of a statement has potential negative effects. I have chosen to make the change on the front end and maintain it throughout (for now). This is based on a number of instances I've witnessed where there is push-back on the language before, during, and after a study. I'm willing to meet them halfway, and my opinion is that the accuracy will still far exceed other types of research. It may even be just as good, but I'm not going to invest a lot of time and money proving it. That's throwing good money after bad. Just do SOMETHING.
I'm all for simplicity.
Could you please provide a before/after desired outcome statement using ODI vs. the simplified method?
When receiving a message from a 3rd party...
Rate on importance of, and satisfaction with ability to...
Minimize the time it takes to interpret the message, e.g., its main points, details supporting each point, etc.
Minimize the time it takes to assess the importance of the message
Minimize the time it takes to assess the credibility of the message
Minimize the likelihood of misunderstanding the message
versus
When trying to understand how to react to a message from a 3rd party
Rate on importance of, and difficulty of achieving...
"Know what the message meant"
so that you can quickly and accurately react to a message
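For context on how ratings like these get used downstream, here is a minimal sketch of the opportunity formula often cited in the ODI literature (opportunity = importance + max(importance - satisfaction, 0), on 1-10 scales). The ratings below are invented for illustration, not from any real study:

```python
# Opportunity score as commonly stated in ODI writing:
# opportunity = importance + max(importance - satisfaction, 0).
# All ratings below are invented for illustration.

def opportunity(importance, satisfaction):
    # Overserved outcomes (satisfaction > importance) don't go negative.
    return importance + max(importance - satisfaction, 0)

ratings = {  # outcome -> (mean importance, mean satisfaction), 1-10 scale
    "minimize the time it takes to interpret the message": (8.2, 4.1),
    "minimize the likelihood of misunderstanding the message": (9.0, 7.5),
}
scores = {name: opportunity(imp, sat) for name, (imp, sat) in ratings.items()}
```

Underserved outcomes (high importance, low satisfaction) score highest, which is what makes them candidates for innovation.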
Thank you. Very helpful.
I'm still a JTBD newbie and I'll be asking a lot of stupid questions; hope you won't mind :-)
I can see how this simplifies the capture & the quantification of desired outcomes. Though how do you know when you've collected "enough" situations/circumstances/stories for each desired outcome?
Situations/circumstances/contexts are related to the job. I don't know when you've collected enough, I only know when the survey gets too long :)
As for stories, the important message here is that a segment will have a subset of outcomes - say 7-10 - that are being rated differently than the rest of the population. The key to the stories is to understand the common theme which made the segment rate them similarly. We try to capture this in the survey, but in my opinion there is no substitute for going back to the segment and doing a 1:1 for an hour. I get into that more here. https://jobstobedone.substack.com/p/youve-been-using-verbatims-all-wrong
I will continue to touch on this as I plow forward
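As a rough illustration of what "rated differently" means mechanically, here is a sketch that flags outcomes where a segment's mean rating diverges from the rest of the respondents. The respondent IDs, ratings, and the 1.5-point threshold are all invented for illustration:

```python
# Flag outcomes that a segment rates differently from everyone else.
# Data and the 1.5-point threshold are invented for illustration.
from statistics import mean

def outcomes_rated_differently(ratings, segment_ids, threshold=1.5):
    """ratings: outcome -> {respondent_id: rating}.
    Returns outcomes whose mean rating within the segment differs from
    the rest of the population's mean by more than `threshold` points."""
    flagged = []
    for outcome, by_resp in ratings.items():
        seg = [r for rid, r in by_resp.items() if rid in segment_ids]
        rest = [r for rid, r in by_resp.items() if rid not in segment_ids]
        if seg and rest and abs(mean(seg) - mean(rest)) > threshold:
            flagged.append(outcome)
    return flagged

ratings = {
    "know what the message meant":         {"a": 5, "b": 5, "c": 2, "d": 1},
    "know who to escalate the message to": {"a": 3, "b": 3, "c": 3, "d": 3},
}
flagged = outcomes_rated_differently(ratings, segment_ids={"a", "b"})
```

The flagged subset is where the 1:1 follow-ups pay off: the interesting question is what common theme made that segment converge on those ratings.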
Makes sense. Thank you!