Below is a structured explanation and high-level blueprint for how one might design a complexity-factor–inclusive simulation environment using a system-dynamics or agent-based modeling approach (or a hybrid). It integrates the concept of Outcomes-Driven Innovation (ODI) with the ability to run “what-if” scenarios around the emergence of new complexity factors, all without relying solely on real human survey data. I’ll break this down into three key parts:
Is System Dynamics Appropriate (and/or Necessary)?
High-Level Analytical System Design
What a “Person” (Agent) Consists Of
Is System Dynamics Appropriate (and/or Necessary)?
Why System Dynamics?
Feedback loops: One of the main values of a system-dynamics approach is its ability to capture and model feedback loops. In the context of ODI, the introduction of new solutions can create new complexity factors (or alter existing ones), creating reinforcing or balancing loops. For example, a new feature might improve satisfaction initially but introduce added complexity in the longer term, making the overall environment more complicated. System dynamics is highly adept at modeling such loops over time.
Aggregated perspective: Traditional ODI research focuses on the voice of individual job executors, each of whom rates the importance of, and their satisfaction with, specific desired outcomes. But when you layer in complexity factors, you start to track system-level dynamics (e.g., how does a solution in one domain shift user expectations and complexities in parallel or adjacent domains?). System dynamics is well suited to capturing that big picture.
Why Might an Agent-Based Model (ABM) Also Be Appropriate?
Heterogeneity: OD practitioners know that not all job executors have the same context or constraints. Agent-based models allow us to simulate large numbers of heterogeneous “people,” each with their own situational factors, context, and unique decision-making heuristics.
Emergent phenomena: Complexity factors, by definition, can create emergent outcomes when scaled across many agents. Agent-based modeling is a natural fit for capturing how localized behaviors can aggregate into system-wide patterns or the emergence of entirely new complexity factors.
Combining System Dynamics & ABM (A Hybrid)
In many real-world situations, the “backbone” of the complexity is best explained in a system dynamics causal loop diagram that describes how complexity factors and outcomes interplay. Then, within that structure, you can embed an agent-based approach where each agent represents a job executor with:
Personalized rating logic for importance/satisfaction
Reaction to new solutions (adoption or rejection)
Propensity to generate or amplify new complexity factors
Verdict: For the type of multi-factor simulation described here, it is both appropriate and beneficial to consider a system-dynamics approach—potentially in combination with an agent-based model if you want to capture the emergent behavior of multiple types of job executors in different contexts.
High-Level Analytical System Design
This proposed solution simulates how complexity factors (and their interplay with desired outcomes) evolve over time as new solutions are introduced. It replaces typical survey-based scenario analysis with an “army of agents” approach.
Step 1: Model Setup and Complexity Factor Definition
Define Job Outcomes: As in standard ODI, list the “success metrics” (the job outcomes) that each agent will aim to optimize (importance vs. satisfaction).
Define Complexity Factors: These are not success metrics but may influence how easy or hard it is to achieve success. Some example complexity factors:
Availability of resources (time, budget, materials)
Environmental constraints (regulations, climate, geographical constraints)
Organizational culture or policy
Technological constraints and interoperability issues
Each complexity factor should have:
A scale or range (e.g., 1–5 or 1–7) for the “level of influence”
A rule set describing how it affects each outcome dimension (importance, satisfaction, or both)
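As a concrete sketch of the two requirements above (a bounded scale and a rule set), a complexity factor could be modeled as a small data structure holding its influence level and a callable effect rule. The class name, the `scarcity_effect` rule, and the specific coefficients are illustrative assumptions, not part of any real framework:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch: one complexity factor with a bounded influence level
# and a rule describing how it shifts an outcome's satisfaction score.
@dataclass
class ComplexityFactor:
    name: str
    level: float = 1.0          # current influence, e.g. on a 1-5 scale
    min_level: float = 1.0
    max_level: float = 5.0
    # effect(outcome_name, level) -> delta applied to that outcome's satisfaction
    effect: Callable[[str, float], float] = lambda outcome, level: 0.0

    def clamp(self) -> None:
        # Keep the influence level inside its declared scale
        self.level = max(self.min_level, min(self.max_level, self.level))

# Example rule (an assumption for illustration): resource scarcity subtracts
# satisfaction points from any "speed"-related outcome as its level rises.
def scarcity_effect(outcome: str, level: float) -> float:
    return -0.5 * (level - 1) if "speed" in outcome else 0.0

resources = ComplexityFactor("resource availability", level=3.0, effect=scarcity_effect)
```

Keeping the effect as a plain callable makes it easy to swap in different rule sets per factor without changing the simulation engine.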
Step 2: Agent Definition
Attributes: Each agent (representing a “person” or “job executor”) would have:
Context: Industry, role, region, etc.
Situational Factors: Time constraints, budget constraints, risk tolerance, etc.
Attitudes & Behaviors: Past experience with certain solutions, brand loyalty, baseline inclination to adopt new technology.
(More details on the “person” or agent below in section 3.)
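The agent attributes listed above could be captured in a minimal record like the following. The field names and the dissatisfaction threshold are assumptions chosen to mirror the description, not a definitive schema:

```python
from dataclasses import dataclass, field
from typing import Dict

# Hypothetical agent sketch: attributes mirror the Context / Situational
# Factors / Attitudes & Behaviors groups described above.
@dataclass
class Agent:
    role: str                       # context: role, industry, region, etc.
    time_pressure: float            # situational: 0 (none) to 1 (severe)
    budget_constraint: float        # situational: 0 (flexible) to 1 (strict)
    risk_tolerance: float           # behavioral: 0 (averse) to 1 (seeking)
    importance: Dict[str, float] = field(default_factory=dict)    # outcome -> 1-5
    satisfaction: Dict[str, float] = field(default_factory=dict)  # outcome -> 1-5

    def is_dissatisfied(self, outcome: str, threshold: float = 3.0) -> bool:
        # A simple trigger for "search for a new solution"
        return self.satisfaction.get(outcome, 5.0) < threshold
```

A population is then just a list of such agents with varied attribute values, which is what gives the model its heterogeneity.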
Step 3: Causal Loop Modeling / System Map (System Dynamics Layer)
A simplified approach might involve:
Identify Key Stocks and Flows:
Stock = “Adoption Level of New Solutions”
Stock = “Prevalence/Intensity of Complexity Factor X”
Flow = “Rate at Which Complexity Factor X Emerges or Increases”
Flow = “Rate at Which Complexity Factor X is Mitigated/Reduced”
Causal Links:
New solutions introduced → might reduce some complexity factors but → might spawn new ones.
Increasing complexity → might reduce average satisfaction → might create demand for other new solutions.
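The stocks, flows, and causal links above can be sketched as a minimal stock-and-flow update with Euler integration. The rate constants are illustrative assumptions, not calibrated values; the point is the loop structure, in which adoption both mitigates the existing complexity factor and spawns some new complexity:

```python
# Minimal stock-and-flow sketch (one-step Euler integration).
# Stocks: adoption level (0-1) and complexity factor intensity.
def step(adoption: float, complexity: float, dt: float = 1.0):
    """One timestep: adoption diffuses toward saturation, and adoption
    both mitigates the old complexity and generates a little new complexity."""
    adoption_rate = 0.10 * (1.0 - adoption)          # inflow: diffusion
    mitigation_rate = 0.05 * adoption * complexity   # outflow: solutions reduce complexity
    emergence_rate = 0.02 * adoption                 # inflow: ...but also create some
    adoption += adoption_rate * dt
    complexity += (emergence_rate - mitigation_rate) * dt
    return adoption, max(0.0, complexity)

a, c = 0.0, 1.0
for _ in range(10):
    a, c = step(a, c)
```

Whether the reinforcing loop (emergence) or the balancing loop (mitigation) dominates depends entirely on the relative rate constants, which is exactly what the later sensitivity analysis would explore.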
Step 4: Simulation Engine
Initialization: The system starts with a set of n agents, each with initial settings for outcome importance, outcome satisfaction, and complexity factor influence.
Timestep Iterations: Could be monthly, quarterly, or annually. Each iteration:
Each agent “reacts” to the current environment (complexity factors + solutions available).
Each agent updates their personal satisfaction or dissatisfaction levels, potentially adopting new solutions or rejecting existing ones.
Emergent new complexity factors might appear or intensify if certain thresholds are reached (e.g., “When more than 20% of agents adopt solution Z, complexity factor W emerges”).
System-level stocks (e.g., total adoption) are updated. Complexity factor intensities are updated.
Data Collection: Track how overall satisfaction, importance ratings, and the distribution of complexity factors shift over simulation time.
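The engine loop above, including the threshold-triggered emergence rule from the example (“when more than 20% of agents adopt solution Z, complexity factor W emerges”), could look like this toy version. Agent state is reduced to a single satisfaction number plus an adoption flag, and every numeric threshold is an illustrative assumption:

```python
import random

random.seed(42)  # deterministic run for repeatability

N = 100
satisfaction = [random.uniform(1.0, 5.0) for _ in range(N)]
adopted = [False] * N
factor_w_active = False        # emergent complexity factor, initially absent
history = []                   # data collection: (timestep, adoption share, mean satisfaction)

for t in range(24):            # e.g. 24 monthly timesteps
    for i in range(N):
        # Dissatisfied agents consider adopting solution Z
        if not adopted[i] and satisfaction[i] < 3.0:
            adopted[i] = True
            satisfaction[i] += 1.5                 # immediate benefit of the solution
    share = sum(adopted) / N
    # Emergence rule: once >20% adopt Z, complexity factor W appears
    if share > 0.20 and not factor_w_active:
        factor_w_active = True
    if factor_w_active:
        # W slowly erodes everyone's satisfaction, creating renewed demand
        satisfaction = [max(1.0, s - 0.05) for s in satisfaction]
    history.append((t, share, sum(satisfaction) / N))
```

The `history` list plays the data-collection role: plotting it over timesteps shows adoption rising, factor W switching on, and mean satisfaction bending downward afterward.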
Step 5: Analysis of Results
Sensitivity Analysis: Tweak different complexity factors (or their strength) and see how it changes system outcomes.
Scenario Exploration: Introduce hypothetical solutions (e.g., “Solution A aims to reduce time complexity at the expense of higher cost complexity”) to see how agents respond.
Emergent Factor Identification: Track under which conditions new complexity factors are triggered or become significant.
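A sensitivity analysis is operationally just rerunning the simulation while sweeping one parameter and comparing the results. Using a deliberately tiny stand-in model (the erosion and recovery coefficients are assumptions for illustration):

```python
# Sketch of a sensitivity sweep over one complexity factor's strength.
def run_sim(complexity_strength: float, steps: int = 12) -> float:
    """Return average final satisfaction given one complexity factor's strength."""
    satisfaction = 4.0
    for _ in range(steps):
        satisfaction -= 0.1 * complexity_strength    # complexity erodes satisfaction
        satisfaction += 0.05 * (5.0 - satisfaction)  # solutions slowly recover it
        satisfaction = min(5.0, max(1.0, satisfaction))
    return satisfaction

# Sweep the factor's strength and collect outcomes for comparison
sweep = {s: run_sim(s) for s in (0.0, 0.5, 1.0, 2.0)}
```

The same harness supports scenario exploration: a hypothetical “Solution A” would simply be a different set of coefficients (lower time-complexity erosion, higher cost-complexity erosion) fed into `run_sim`.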
Potential AI/ML Components
Agent Decision Logic: Could use a machine-learning model for each agent’s decision-making. For instance, a reinforcement learning approach might determine how agents adapt over time to new solutions in a changing environment.
Complexity Factor Generation: A generative AI component could be used to hypothesize new complexity factors based on patterns in the simulation (e.g., “When X and Y are high, a new complexity factor Z might become relevant”).
Adaptive System Dynamics: Some advanced system-dynamics platforms incorporate optimization or ML to tune the parameters of feedback loops.
What a “Person” (Agent) Consists Of
A core principle of ODI is that the job executor is a composite of situational factors and contexts, rather than just demographics. In this simulation, we want each agent to be a multi-dimensional “person” with:
Situational Factors
Time Pressure: Is the agent facing tight deadlines, or do they have flexibility?
Budgetary Constraints: Are they operating under strict budget rules or do they have discretionary spend?
Environmental Constraints: Could be physical location, weather, or even cultural norms in that region.
Contextual Factors (Job-Related)
Role/Function: Are they in an operational role, a managerial role, or a specialized technical role? This influences priorities for certain complexity factors.
Job-to-be-Done: The specific outcome they care about (e.g., “I need to create a marketing campaign faster” vs. “I need to run a manufacturing line more efficiently”).
Behavioral Propensities
Adopter Profile: Early adopter vs. late majority vs. laggard. This influences how quickly they might adopt or even consider a new solution.
Risk Tolerance: Agents with higher risk tolerance might try more cutting-edge solutions, potentially encountering new complexity factors first.
Social Influence: Sensitivity to peer usage. (In some models, if enough agents in a social network adopt a solution, an agent might follow suit or react in the opposite direction.)
Outcomes Profile (Importance vs. Satisfaction)
Importance Ratings: For each outcome in the job, how important is it to this agent? (ODI scale: 1–5 or 1–10)
Satisfaction Levels: For each outcome, how satisfied is this agent currently with their set of tools/solutions?
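These two ratings are commonly combined in ODI practice into an opportunity score, Opportunity = Importance + max(Importance − Satisfaction, 0), which each simulated agent (or the population in aggregate) can compute per outcome:

```python
def opportunity_score(importance: float, satisfaction: float) -> float:
    """ODI-style opportunity score: Opp = Imp + max(Imp - Sat, 0).
    Important but poorly served outcomes score highest; the max() term
    prevents over-served outcomes from dragging the score below importance."""
    return importance + max(importance - satisfaction, 0.0)

# An outcome rated important (9) but poorly served (3) is a big opportunity:
opportunity_score(9.0, 3.0)   # -> 15.0
```

Tracking how each outcome's opportunity score drifts over simulation time is one natural summary statistic for the data-collection step.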
Complexity Factor Experience
An array or vector describing how each complexity factor impacts them (e.g., “Time Complexity is 4 out of 5 for me,” “Environmental Complexity is 2 out of 5,” etc.).
Possibly dynamic, so as new solutions or changes happen, these complexities can fluctuate.
Interaction Rules (Emergent Agency)
Decision Logic: “If (Satisfaction) < threshold → search for new solution,” or “If new complexity factor emerges, re-rate satisfaction.”
Network Effects: If you want to replicate “word of mouth” or social proof, each agent could be influenced by neighbors in a network graph.
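The two interaction rules above (a satisfaction threshold and neighbor influence on a network graph) can be sketched together. The adjacency dict, node names, and both thresholds are illustrative assumptions:

```python
# Decision logic combining the personal threshold rule with social proof.
def decide(agent_satisfaction: float, neighbor_adoption_share: float,
           sat_threshold: float = 3.0, social_threshold: float = 0.5) -> bool:
    """Adopt a new solution if personally dissatisfied OR if enough
    neighbors in the network have already adopted it."""
    return (agent_satisfaction < sat_threshold
            or neighbor_adoption_share >= social_threshold)

# A tiny network as a plain adjacency dict, plus each node's adoption state
network = {"a": ["b", "c"], "b": ["a"], "c": ["a", "b"]}
adopted = {"a": True, "b": True, "c": False}

def neighbor_share(node: str) -> float:
    # Fraction of this node's neighbors that have adopted
    nbrs = network[node]
    return sum(adopted[n] for n in nbrs) / len(nbrs)
```

Richer variants could make the social threshold itself an agent attribute (tying it to the adopter-profile trait above), or flip the sign to model contrarian agents who react against peer adoption.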
In sum, a “person” in this agent-based environment is a rich, multidimensional simulation entity whose situational and contextual factors drive emergent patterns in the overall system.
Putting It All Together
Feasibility: A system-dynamics–plus–agent-based approach is indeed both appropriate and advantageous for modeling how complexity factors interact with ODI-style metrics (importance and satisfaction) at scale.
High-Level System Design:
Define the job outcomes
Define the complexity factors
Build a causal structure or system-dynamics model
Embed an agent-based layer for heterogeneous ‘people’
Incorporate AI/ML for dynamic adaptation, complexity-factor emergence, and advanced decision logic
Simulation Steps: Initialize with a population of agents, simulate interactions over time, measure how solutions are adopted, how complexity factors evolve, and how that affects outcomes.
Results: Identify which complexity factors have the biggest influence on satisfaction and importance, how new solutions might generate new complexity factors, and how the adoption dynamics might play out differently in different hypothetical scenarios.
By leveraging this combined (or hybrid) approach, one can systematically replace or augment traditional survey-based scenario analysis with a dynamic, emergent “army of agents.” Each agent acts based on a curated set of situational and contextual variables, offering a more nuanced, future-proof way to explore the interplay between the job’s desired outcomes and the evolving complexity landscape.
Mike Boysen - www.pjtbd.com
Why fail fast when you can succeed the first time?
🚨 Join my FREE community for all of my AI Prompts, JTBD Courses and weekly AMA office hours: https://pjtbd.com/join
📆 Book an appointment: https://pjtbd.com/book-mike
🆓 Free stuff in my Notion database: https://pjtbd.notion.site