AI Is Quietly Rewiring the ADDIE Model (In a Good Way)
The traditional ADDIE workflow isn't dead, but it is evolving
Hey folks!
As we head into the third week of 2026, one thing is clear — most conversations about AI in learning design still sound a lot like they did in 2023:
“AI will generate courses faster.”
“AI will replace instructional designers.”
“AI can make videos and write quizzes now.”
All of these things are true, but they’re also kind of missing the point.
The real story isn’t what AI can produce — it’s how it changes the decisions we make at every stage of instructional design.
After working with thousands of instructional designers on my bootcamp, I've learned something counterintuitive: the best teams aren't the ones with the fanciest AI tools — they're the ones who know when to use which mode, and when to use none at all.
Once you recognise that, you start to see instructional design differently — not as a linear process, but as a series of decision loops where AI plays distinct roles.
In this post, I'll show you the 3 modes of AI that actually matter in instructional design — and map them across every phase of ADDIE so you know exactly when to let AI run, and when to slow down and think.
Let’s dive in!
The 3 Modes of Working with AI
Before you think about tools, prompts, or workflows, ask yourself one question:
What kind of work am I actually doing?
There are three simple modes to choose from.
🎨 1. Vibe Prompting — Ideas & First Drafts
Use this when:
You want speed or inspiration
You’re exploring possibilities
Quality is easy to spot-check
Think:
"Brainstorm metaphors or analogies that could help engineers understand why documentation matters to end users."
"What are 8 unexpected and creative ways I could make compliance training on data privacy feel less boring and more relevant to customer support teams?"
"Generate 5 different hooks I could use to open a module on giving difficult feedback — something that grabs attention in the first 30 seconds."
How to do it: Just chat. Use any LLM (ChatGPT, Claude, Gemini) like you’d talk to a colleague. Prompt casually. Grab what’s useful, discard the rest. No process needed — as the expert, you’re the filter.
AI is your creative intern — fast, energetic, occasionally unhinged but in a good way. Great for exploration and ideation, but not for judgment and precision.
🤖 2. Automation — Delegate Repeatable Work
Use this when:
The task is repeatable
The method is clear and structured
Mistakes are low risk
Think:
“Every time we write learning objectives, format them using the approved template and process attached.”
“Convert these SME interview notes into our standardised insights report format using the exact semantic analysis process attached.”
“Tag all of these learner feedback comments using the four categories specified in the attached (clarity, engagement, pacing, technical). Then, generate a summary report using the template (also attached).”
AI needs you! These tasks need clear inputs — like a template, a process doc & example outputs — so AI knows exactly what “good” and "done" looks like.
How to do it: Without structured rules and steps, AI will produce outputs of wildly inconsistent quality — sometimes following your format, sometimes inventing shortcuts. A structured process like FRAME™️ locks in quality so that every run of a repeatable task produces the same result, every time.
FRAME™️ workflow:
F — Find the Evidence: Use a research tool (Consensus, Perplexity Academic, etc.) to extract the rules for doing this task well (e.g., “step-by-step rules for well-written objectives”).
R — Role & Rules: In a deep-thinking LLM, define the model as a process bot that must follow those rules only, never invent new ones.
A — Assemble Inputs: Feed it real examples (before → after), a step-by-step process doc and an exemplar output, so it stops guessing.
M — Model, Measure, Modify: Challenge AI to explain its thinking and iterate until the output is reliably on-brief, then lock that pattern in.
E — Expand & Embed: When quality is stable, codify it as automation: a reusable template, a custom GPT/bot, or (for ultimate automation) an agent/Make/Zapier workflow that runs without you touching it.
AI is your ops assistant — consistent, disciplined and immune to the boredom that comes from repetition.
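To make the automation mode concrete, here's a minimal sketch of the feedback-tagging task from the examples above. It's a toy, not a real implementation: `classify()` is a keyword-matching stand-in for an actual LLM call (in practice you'd send each comment plus your rules doc to whichever model you use). What it does show is the FRAME idea of locked-in rules — the four categories are fixed, and anything outside them is rejected rather than silently invented.

```python
# Hedged sketch: tag learner feedback into four fixed categories and group
# the results. classify() is a placeholder for a rules-constrained LLM call.

ALLOWED_TAGS = {"clarity", "engagement", "pacing", "technical"}

def classify(comment: str) -> str:
    """Stand-in classifier. Swap in an LLM call constrained by your rules doc."""
    keywords = {
        "clarity": ["confusing", "unclear"],
        "engagement": ["boring", "engaging"],
        "pacing": ["fast", "slow", "rushed"],
        "technical": ["audio", "video", "broken"],
    }
    lowered = comment.lower()
    for tag, words in keywords.items():
        if any(word in lowered for word in words):
            return tag
    return "clarity"  # default bucket; a real workflow might flag for human review

def tag_comments(comments: list[str]) -> dict[str, list[str]]:
    """Tag each comment, enforcing the rule: never invent new categories."""
    report: dict[str, list[str]] = {tag: [] for tag in sorted(ALLOWED_TAGS)}
    for comment in comments:
        tag = classify(comment)
        if tag not in ALLOWED_TAGS:  # the "Rules" step in action
            raise ValueError(f"Unexpected tag: {tag}")
        report[tag].append(comment)
    return report

report = tag_comments([
    "The video audio kept cutting out.",
    "Module 2 felt rushed.",
    "Honestly the scenarios were really engaging!",
])
for tag, items in report.items():
    print(f"{tag}: {len(items)} comment(s)")
```

The point of the guardrail isn't the keyword matching — it's that every run produces the same four buckets in the same format, which is exactly what makes a task safe to hand off.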
🧠 3. Copilot — Decision Support
Use this when:
The task requires nuanced judgment
Context really matters
Wrong answers are hard to detect
Think:
“Given these learners' prior knowledge and job constraints, which instructional strategy fits best — scenarios, worked examples, or discovery learning — and why?"
"We have three ways to deliver this compliance training: async self-paced, live instructor-led, or blended. Which one actually solves our constraint (time zones + busy schedules) for [learner profile] without sacrificing learning outcomes?"
"The data shows 60% of learners are struggling with this concept. Is it a knowledge gap, a practice gap, or a transfer gap — and what should we change?"
AI needs you! These tasks also need clear inputs — like a template, a process doc & example outputs — so AI knows exactly what “good” and “done” looks like.
How to do it: The same principle applies here: without structured rules and inputs, AI will drift — sometimes following your reasoning framework, sometimes inventing its own. FRAME™️ keeps every decision-support session grounded in the same evidence, constraints and process.
FRAME™️ workflow:
F — Find the Evidence: Start in a research tool to define what “good” looks like (e.g., evidence-based principles for practice, feedback, or transfer).
R — Role & Rules: Make the LLM your critical thought partner, not your boss: it must base its reasoning on that research summary and your constraints, and say “Not specified in the research” when it has to guess.
A — Assemble Inputs: Load in the learner profile, constraints and current performance data, along with a step-by-step process doc and an exemplar output, so it has the context it needs and stops guessing.
M — Model, Measure, Modify: Ask it to generate options, compare them, surface trade-offs, and then critique its own recommendation against your rules.
E — Expand & Embed: Instead of fully automating, E here usually means creating a reusable copilot template or team bot (GPT) that structures thinking (e.g., “always ask for options, pros/cons, assumptions, confidence level”) — but keeps a human as final decision-maker.

AI is your thought partner — not deciding for you, but helping you decide better.
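One way to make the copilot's E step tangible is a reusable prompt template. The sketch below is an illustration, not a prescription: the function name and section wording are my own, but the structure is the one described above — options, trade-offs, assumptions, a confidence level, and an instruction to say “Not specified in the research” rather than guess. You'd paste the result into whichever LLM you use.

```python
# Hedged sketch: assemble a decision-support prompt with a fixed reasoning
# structure, so every copilot session asks for the same things.

def build_copilot_prompt(decision: str, research_summary: str, constraints: list[str]) -> str:
    """Build a structured prompt for a human-in-the-loop design decision."""
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        "You are my critical thought partner, not my boss.\n"
        f"Decision to support: {decision}\n\n"
        f"Research summary (your only evidence base):\n{research_summary}\n\n"
        f"My constraints:\n{constraint_lines}\n\n"
        "Respond with exactly four sections:\n"
        "1. Options (at least two)\n"
        "2. Pros and cons of each, tied to my constraints\n"
        "3. Assumptions you are making\n"
        "4. Confidence level (low/medium/high), with reasons\n"
        "If the research does not cover something, say 'Not specified in the research'."
    )

prompt = build_copilot_prompt(
    decision="Async self-paced vs live instructor-led for compliance training",
    research_summary="Spaced practice beats massed practice; feedback timing matters.",
    constraints=["Learners span 6 time zones", "Max 30 minutes per week"],
)
print(prompt)
```

Because the structure is hard-coded, you can share the template with your whole team and know every decision gets the same scrutiny — while the final call stays with a human.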
TL;DR
💡 Ideas → Vibe (just chat with an LLM)
⚙️ Execution → Automate (FRAME™️, E → automation/agents, humans periodically in the loop to check the automation is functioning)
🧭 Decisions → Copilot (FRAME™️, E → reusable templates/bots, human always in the loop).
Now let’s apply that to ADDIE — the backbone of instructional design.
1. Analysis — Surface Evidence, Not Meaning
Analysis is about figuring out what problem you’re really solving. This phase sets the foundation for everything that follows: understanding your learners, the gap between current and desired performance, and what constraints you’re working within.
Designers often get into trouble by asking AI to jump straight to conclusions. Don’t. Use it to surface evidence and patterns — not meaning.
Key tasks you’ll own:
Conduct needs assessment – Identify the performance gap between current state and desired state
Analyse learners – Demographics, prior knowledge, learning preferences, access to technology
Perform task analysis – Break down what experts actually do (step-by-step procedures or cognitive processes)
Define success metrics – How will you know if training worked? (This feeds your evaluation strategy later)
AI is a lens, not an oracle. Let it sort, not decide.
2. Design — Where Copilot Matters Most
Design is where you decide what to build. This is also where AI outputs often sound great yet quietly miss the underlying learning logic. The risk is misalignment with learners, constraints, or performance outcomes.
Key tasks you’ll own:
Select instructional strategy – Which approach fits best (scenarios, worked examples, discovery learning, simulations, etc.)? Not just what sounds plausible.
Write learning objectives – Apply Bloom’s taxonomy, include performance conditions and criteria (ABCD or SMART format), ensure alignment to real-world performance
Design assessment strategy – Determine formative (during learning) vs. summative (final) assessments; ensure alignment to objectives
Sequence content – Use prerequisite logic to decide what learners must know first
Plan for Kirkpatrick evaluation – Design how you’ll measure Levels 1–4 (reaction, learning, behaviour, results) now, not as an afterthought
Design for accessibility – Plan WCAG compliance and universal design principles from the start
This is the phase where Copilot beats Automation every time. AI shouldn’t replace your design judgment — but it can stretch your reasoning, surface trade-offs, and help you make and defend your choices.
3. Development — Speed with Guardrails
Development is where you build the actual learning experience. This is where AI earns its keep — drafting, formatting, checking — all faster. But pedagogy and accuracy still need a human hand.
Key tasks you’ll own:
Write assessment items – Create quiz questions, rubrics, performance checklists aligned to objectives
Design feedback messages – Specify what learners see when they’re right, wrong, or partially correct
Develop multimedia assets – Brief video scripts, audio descriptions, graphics (or coordinate with designers/videographers)
Create storyboards – Screen-by-screen layout, interaction notes, visual descriptions, branching logic, navigation flow
Conduct accessibility audit – Alt text, captions, keyboard navigation, color contrast, screen reader compatibility
Facilitate SME review cycles – Validate content accuracy; incorporate expert feedback
Pilot test with target learners – User testing with 5–10 representative learners before full launch
Revise based on feedback – Iterative improvement of content, clarity, and engagement
Here, AI is the accelerator, but you’re at the steering wheel.
4. Implementation — Let AI Handle the Logistics
Implementation is often a logistics problem disguised as design. You’re coordinating people, systems, and communications. Automation shines here.
Key tasks you’ll own:
Configure LMS/platform – Upload content, set up access control, test functionality
Train facilitators (if instructor-led or virtual) – Hands-on practice, dry runs, Q&A sessions, not just materials
Create learner onboarding – How to access, navigate, get help, troubleshoot tech issues
Develop manager enablement – How managers can support learner application back on the job
Set up technical support – Help desk, FAQ, escalation procedures, troubleshooting resources
Manage enrolment – Registration systems, access provisioning, learner tracking
The result: smoother handoffs, fewer dropped balls, and a team that actually looks forward to launches.
5. Evaluation — Design Measurement, Own the Decisions
Evaluation might be where AI feels most magical — it can summarise data and spot patterns fast. But insights ≠ actions. You own the interpretation and decisions.
Key tasks you’ll own:
Design data collection plan – Specify what, when, how, and from whom across all Kirkpatrick levels
Measure Level 1: Reaction – Learner satisfaction, engagement, relevance (immediately post-learning)
Measure Level 2: Learning – Knowledge/skill acquisition via pre/post assessments or quizzes
Measure Level 3: Behaviour/Transfer – On-the-job application and behaviour change (30–90 days post-learning)
Measure Level 4: Results – Business impact and ROI (align to original business goals)
Analyse data – Both quantitative (statistics, significance) and qualitative (themes, quotes, patterns)
Report to stakeholders – Tailor findings and recommendations by audience (exec summary vs. detailed report)
Plan improvements – Prioritise changes, allocate resources, schedule iteration cycles
AI is incredible at summarising what happened — you still decide what to summarise and how, and what to do about what you learn.
The Big Picture in 2026
When it comes to the typical instructional design workflow, I have two key predictions for 2026:
1. Linear processes like ADDIE won’t die — but they will evolve to look very different from their traditional form.
2. AI won’t replace instructional design, but it will reshape how we complete tasks and redefine where our human judgment adds the most value.
The most effective learning designers right now aren’t chasing better prompts or shinier tools; they’re getting very good at one thing: knowing when to delegate work to AI, when to make decisions with AI, and when to work alone.
If you remember one thing, make it this:
🎨 Use vibe prompting for speed & exploration
🤖 Use structured AI automations for repeatable execution of low-risk tasks
🧠 Use carefully designed AI copilots to help you make the decisions that matter most
That’s what separates teams who just use AI from those who actually design with it.
Happy experimenting!
Phil 👋
PS: Want to learn how to apply AI at each step of your process with me and a community of people like you? Apply for a place on my bootcamp!