Universal Jobs-to-be-Done Model: Interacting with a Software Feature
This Jobs-to-be-Done model looks at how software end users evaluate their experience when interacting with software features.
This is a special post of the kind I will occasionally publish for paid subscribers. It's the least I can do, even though I never intended to do so.
End users of a software application have to navigate some sort of interface layer that contains objects/elements, which hopefully provide clues as to their purpose and whether they should be used to trigger a desired action. Many factors could cause a user to struggle as they attempt to use a feature through one or more user interface elements. The goal of the designer and the developer should be to minimize those struggles across the various circumstances end users find themselves in.
This model could be used to survey a user base (after the fact) or converted into a set of worksheets that let workshop participants provide feedback earlier in the design process. Most of this should be common sense to an experienced software designer.
This is based on my modified version of Jobs-to-be-Done modeling, using inputs from a number of thought leaders I take very seriously. It may not be perfect, but it's better, much better, than failing fast.
⭐ Note on model integration: this universal model is designed to provide tight integration between metrics and steps, steps and the job, and ultimately the job and related jobs, or the job as a step in a larger job. This system is used to test-fit components of the model while building it, as well as to elaborate the prioritized metrics that describe a struggle within a segment into a compact and portable job story.
Job Story Structure
As a [Job Executor] + who is + [Job] + [Situation] you're trying to [Outcome] + "faster and more accurately" so that you can successfully [Job Step]
Example:
As a software end user who is interacting with a software feature you're trying to know which user interface element will trigger the desired action faster and more accurately so that you can successfully identify the feature you need to use
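To make the template mechanical, here is a minimal sketch that assembles a job story string from its components. The function and parameter names are my own illustration, not part of the model:

```python
def build_job_story(job_executor, job, outcome, job_step, situation=""):
    """Assemble a job story from the template:
    As a [Job Executor] who is [Job] [Situation] you're trying to
    [Outcome] "faster and more accurately" so that you can
    successfully [Job Step].
    """
    context = f"{job} {situation}".strip()  # the situation slot is optional
    return (
        f"As a {job_executor} who is {context} "
        f"you're trying to {outcome} faster and more accurately "
        f"so that you can successfully {job_step}"
    )

# Reproduces the example job story above.
story = build_job_story(
    job_executor="software end user",
    job="interacting with a software feature",
    outcome="know which user interface element will trigger the desired action",
    job_step="identify the feature you need to use",
)
```

Keeping the slots as separate arguments makes it easy to generate one story per prioritized metric straight out of survey results.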
Capturing Responses
I'll include a full survey instruction set for this model for paid subscribers in the near future (it requires far more than the job map and performance metrics). In the meantime, here is a simple depiction of how best to present performance metric questions within a survey. Generally, you'll include one job step per page. Depending on the survey, we might pipe specifics into the headers for each question, e.g., "when using Salesforce.com", or "when using your calendar software", etc.
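As a sketch of that piping idea (the function name and question wording here are my illustration, not a real survey tool's API), each performance metric on a job step's page can have the product context injected into its header:

```python
# One job step per survey page; the product context (e.g., "using
# Salesforce.com") is piped into the header of each metric question.
def render_question(metric, context):
    return f"When {context}, how difficult is it to {metric}?"

# Two metrics from the "Identify the feature you need to use" step.
metrics = [
    "know which user interface element will trigger the desired action",
    "know where the element is located within the user interface",
]
page = [render_question(m, "using Salesforce.com") for m in metrics]
```

The same page template can then be reused across products by swapping only the piped context string.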
Jobs-to-be-Done models do not need to be overly complicated or lengthy. In fact, the longer they are, the more respondents you'll need to ensure statistical viability. While we like to make them MECE (look it up), for universal models such as this one, we can remove a lot of the specificity. Remember, these are supposed to be directional. They do not output a solution; they simply point to the needle in the haystack so designers can conceptualize solutions around the correct needs-based target.
Job Map
Identify the feature you need to use
Know which user interface element will trigger the desired action
Know which user interface element will trigger an undesired action
Know where the element is located within the user interface, e.g., clearly visible, not obscured, etc.
Know all of the actions the user interface element can initiate, e.g., clearly labeled, etc.
Know that the element can be used on your current device, e.g., via mouse, via touchscreen, etc.
Trigger the desired action
Know how to interact with the feature, e.g., click it, long press, short press, swipe it, talk to it, etc.
Avoid physical discomfort when interacting with the feature, e.g., fatigue, chronic or acute pain, etc.
Know when you can interact with the feature, e.g., needs data, down-stream process not ready, etc.
Know if any shortcuts exist when using the feature, e.g. keystroke combination, etc.
Verify that the feature is ready to use, e.g., not in a disabled state, not dependent on another feature, etc.
Verify how the feature works on your current device
Know which part of the feature to interact with
Know how to input additional data needed by the feature, e.g., the correct data type, the correct sequence, etc.
Verify that the feature has received all of the data it needs
Ensure the feature is doing its job
Know that the action has been triggered, e.g., running in foreground, running in background, etc.
Know what types of feedback the feature offers
Know the meaning of the feedback you receive
Know when the feature is providing feedback
Know when you should take action on feedback
Know how long the feature will be active
Make sure a problem didn’t occur
Verify you didn't inadvertently trigger the wrong action
Verify that you didn't inadvertently modify data
Verify that you didn't inadvertently corrupt your data
Verify that you didn't inadvertently move your data to a new location
Verify that you didn't inadvertently share your data
Verify that you didn't inadvertently delete your data
Verify that you didn't inadvertently duplicate your data
Verify that you didn't trigger the feature multiple times
Correct any errors that occur
Know how to recover data that was inadvertently modified
Know how to recover data that was inadvertently corrupted
Know how to recover data that was inadvertently moved to a new location
Know how to un-share data that was inadvertently shared
Know how to recover data that was inadvertently deleted
Know how to de-duplicate data that was inadvertently duplicated
Know how to stop a process that was inadvertently triggered multiple times
Know how to stop a process that was inadvertently triggered
Know the root cause of an error condition
Know how to prevent future errors
Finish executing the action
Know when the triggered action has completed
Know that the triggered action produced the intended results
Avoid processing results of the action that were not intended
Know how to use the results of the triggered action, e.g., who or what will use the output, etc.
Know how to format the results for specific consumers, e.g., prepare for printing, prepare for input into another process, etc.
Know how to categorize the results for specific consumers
Know how to sequence the results for specific consumers, e.g., sort, etc.
Know how to share the results with specific consumers
Know when to share the results with specific consumers, e.g., real-time, batched, etc.
Share the results with specific consumers
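For survey generation or analysis, the job map above can be held in a simple step-to-outcomes mapping. This is shown abbreviated with two steps; the structure, not the tooling, is the point, and nothing here prescribes a particular survey platform:

```python
# Abbreviated job map: each step keys the list of its performance
# metric (outcome) statements.
job_map = {
    "Identify the feature you need to use": [
        "Know which user interface element will trigger the desired action",
        "Know where the element is located within the user interface",
    ],
    "Trigger the desired action": [
        "Know how to interact with the feature",
        "Verify that the feature is ready to use",
    ],
}

# One survey page per step, one question per outcome statement.
pages = [(step, outcomes) for step, outcomes in job_map.items()]
```

Because steps and metrics stay linked in one structure, prioritized metrics can later be traced back to their step when elaborating job stories.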
Interacting with a Software Feature © 2021 by Michael A. Boysen is licensed under Attribution-NonCommercial-ShareAlike 4.0 International
Your design team will undoubtedly want more information once you’ve identified the specific areas where end users are struggling (which step, which performance metric). Instead of overloading the model with an overwhelming set of attributes, go back to the end users from the segment that is struggling and capture their verbatims. This will ensure they are relevant and specific to the segment.
I’ll throw one more thing in here, and I’m happy to take any feedback on it. Related jobs are things the end user may need to accomplish before, during, or after their involvement with the job we’re studying. I feel the list below is a solid foundation for related jobs. These will also be incorporated into the survey and rated on the dimensions of importance and difficulty.
Related Jobs
Enable a feature, e.g., in a disabled state, etc.
Save data, e.g., store data for future retrieval, etc.
Secure data, e.g., restrict, protect, etc.
Retrieve data
Delete data, e.g., merge, remove, etc.
Edit data, e.g., update personal profile data, change a contact’s phone number, deduplicate data, spellcheck, etc.
Format data, e.g., prepare data for printing, prepare data for export, etc.
Organize data, e.g., categorize, sort, etc.
Share data, e.g., to social networks, to contacts, export lists, print to a printer, etc.
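As a sketch of how those two ratings might be combined into a priority, here is one simple approach. The additive scoring formula and the rating values are my illustration only, not a prescribed part of the model; related jobs that are both important and difficult float to the top:

```python
# Hypothetical survey ratings on 1-5 scales for importance and difficulty.
related_jobs = [
    {"job": "Secure data",   "importance": 5, "difficulty": 4},
    {"job": "Retrieve data", "importance": 4, "difficulty": 2},
    {"job": "Format data",   "importance": 3, "difficulty": 3},
]

# Illustrative priority score: importance + difficulty (an assumption,
# not an official JTBD/ODI formula).
for r in related_jobs:
    r["priority"] = r["importance"] + r["difficulty"]

ranked = sorted(related_jobs, key=lambda r: r["priority"], reverse=True)
```

Whatever formula you settle on, the point is that the two dimensions are captured per segment, so the ranking can differ between segments.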
This is in response to Wil Pannell, who commented on Locals:
@mikeboysen, @wil-pannell Wil, I haven't really gotten into the solution space yet with this. There is a reason I didn't use first person, and it all ties back to my attempt to make job mapping and metrics more logical. In terms of transitioning from survey results to this story format, I maintained the "you" and "your." However, I can definitely see your point when it comes to user stories.
I wrote about that here https://jobstobedone.substack.com/p/a-better-way-to-use-job-stories
I've also written recently about the need to take the priority metrics from a segment (post survey) and attempt to locate respondents from the segment to get verbatims after the survey, because they are the only verbatims that mean anything. So, the goal would be to elaborate any "story" format with actual verbatims before handing off to teams working on concepts or actual designs.
I value your inputs because I believe the next major breakthrough for JTBD is strengthening the hand-off to teams that can use the data.
I'm looking for feedback, so don't be afraid to share your thoughts. I didn't invent this concept, but I'm trying to make it simpler to use and its results simpler to consume.