Your Storyboard Is Now a Source File
How to turn design docs into dynamic wireframes, journey maps & low-fi prototypes
Hello folks 👋
Picture the scene: you’ve spent three weeks carefully building a course storyboard, with every interaction mapped, every quiz question written, every piece of feedback crafted.
You send the 40+ page document to stakeholders for sign-off. The stakeholder approves it, and you joyfully pass the storyboard to production. Then, a week or so later… CLANG — that urgent email lands in your inbox: “Stop everything: this isn’t what I expected.”
If you’ve been in learning design for any length of time, you’ve almost certainly lived this moment at least once. Research tells us why it keeps happening: text-heavy storyboards overload reviewers, causing them to skim or disengage entirely — their “approval” isn’t really approval, it’s more like surrender.
On top of this, when people read the same text, they picture different things — the text feels like shared understanding, but it’s actually parallel misunderstanding. These gaps don’t surface until development begins — the most expensive possible moment to discover them (Ameer & Yusoff, 2012; Cirulli et al., 2017).
TLDR: The storyboard isn’t failing because it’s bad. It’s failing because text alone cannot carry the weight of communicating a multi-dimensional, interactive learning experience.
Moving Beyond the Wall of Text
Other design disciplines solved the “wall of text” problem years ago. Product teams, User Experience (UX) designers, and Customer Experience (CX) teams use dynamic wireframes, journey maps, and prototypes to create shared understanding before committing to development (Nielsen Norman Group).
The reason learning designers haven’t done the same is practical: with 100+ other tasks to manage within the workflow, producing these assets was simply too expensive and too slow. Very few teams had the budget or skills to produce dynamic wireframes.
Early wire-framing tools like Balsamiq moved the needle a little, but the process remained laborious and impractical. The question I’ve been exploring in recent weeks is: does AI have the potential to change the equation entirely?
To answer this I ran some tests. The results were mixed but exciting. Here’s what I’ve learned so far.
The Test
Undeterred by the poor results I got when testing wire-framing with “vibe coding” platforms, I decided to test a different approach to building assets with AI: writing simple prompts for the AI tools we use every day (Claude, ChatGPT, etc.) to generate single HTML files you can open, share, and test immediately.
I tested a simple five-step process as follows:
Upload a “wall of text” storyboard doc into an LLM
Prompt it to turn it into a dynamic HTML document for review (see prompts below)
Generate a dynamic document (iterate and edit if needed)
Save it as an HTML file
Open HTML file in a browser (and/or share the file for others to look at)
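If you haven’t generated one of these before, the output is simply a single, self-contained HTML file — markup, styles, and any scripts all in one document, so there’s nothing to install or host. Here’s a minimal sketch of the kind of skeleton an LLM returns (the file structure is real HTML; the lane content is invented for illustration, not taken from my tests):

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>Module 1 — Journey Map (illustrative sketch)</title>
  <style>
    /* Everything lives in this one file: no external CSS or JS to manage */
    body { font-family: sans-serif; margin: 2rem; }
    .lane { border: 2px dashed #999; padding: 1rem; margin-bottom: 1rem; }
  </style>
</head>
<body>
  <h1>Module 1 — Learner Journey</h1>
  <!-- Each "lane" is one dimension of the journey: phases, activities, load, risks -->
  <div class="lane">Phase 1: Hook — short opening scenario</div>
  <div class="lane">Phase 2: Concept — core content screens</div>
</body>
</html>
```

Save it as something like `module-1-map.html` and double-click it: it opens in any browser, with no server and no login for your stakeholders.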
I scored every output that I produced from this process on one core question: does this faithfully represent my actual storyboard, and could I use it professionally?
The results here were much better than I expected. Here are the headlines:
TLDR: Two AI models are capable of producing almost “production-ready” outputs. All models perform better than vibe coding platforms, and all perform well enough if you’re aware of their “failure modes” and actively correct common errors.
Of course, the AI tool you use and how you prompt it matters a lot. Let’s dive into the detail and hone your development skills.
Tests & Findings
I tested AI’s ability to create three different types of “review” document. Each of these helps to communicate a different sort of information to our stakeholders.
Here’s what I found:
1. Learner Journey Map
What it does. Borrowed from CX/UX journey mapping, a learner journey map lays out the full arc of the learning experience as a structured, multi-lane visual timeline you can scan in under a minute.
Why it matters. This sort of doc is a great antidote to doc-review fatigue. A 40-page storyboard becomes a single visual the whole team can point at and discuss. It surfaces the problems that are invisible in linear text: pacing issues, cognitive load spikes, transitions that don’t connect.
A journey map replaces the meeting where someone says “can you walk us through the storyboard?” — because the walk-through is already visible.
Use Claude. Claude scored 95/100 — the highest score of any output in the entire evaluation. It produced a 12-step, multi-lane map tracking phases, activities, cognitive load, risk points, and outcomes in parallel. ChatGPT (86/100) was solid but paraphrased more heavily — expect 13–15% of your storyboard detail to be “smoothed over” or abbreviated if you use OpenAI models.
Watch for:
Claude’s output is dense and text-heavy, so you may need to simplify it for stakeholder presentations.
If using Gemini (72/100): it hallucinates enrichment items (case studies, cheat sheets) that aren’t in your storyboard — check every entry against your source.
If using Copilot (58/100): expect ~50% of your journey detail to be missing; treat it as a rough outline, not a finished map.
The Prompt (upload your storyboard document, then paste this into your LLM):
“You’re a CTO at an ed tech. Using the storyboard doc attached, give me the code I need to create a visualised journey map of module 1. Borrowed from UX and product discovery, a journey map visually shows how a learner experiences the course from start to finish (touch points, emotions, pain points, outcomes).”
👇 Here’s what Gemini produced - a basic overview with a LOT of missing info.
👇 Here’s what Claude produced - a dynamic doc with 95% accuracy first time.
2. Interactive Wireframe
What it does. A screen-by-screen representation of your course or module in low-fidelity sketch style — think Balsamiq or napkin-drawing aesthetic — with simulated interactions.
Every screen the learner would see is laid out individually. Drag-and-drop activities function. Quiz questions accept answers and return feedback. Accordions expand. It deliberately looks rough to signal “this is a blueprint, not a finished product” — but every interaction rule is represented in a testable way.
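To make “testable interaction” concrete, here’s a toy fragment of the kind of thing these wireframes contain — one quiz question with working feedback, in plain HTML that runs in any browser. The question and feedback strings are invented for illustration, not lifted from any storyboard:

```html
<!-- Illustrative only: a single wireframe quiz screen with working feedback -->
<div style="border:2px dashed #999; padding:1rem; font-family:sans-serif;">
  <p><strong>Q1 (example):</strong> Which format best suits developer handoff?</p>
  <!-- Each button writes its feedback string into the paragraph below -->
  <button onclick="document.getElementById('fb').textContent =
    'Not quite — a clickable prototype hides the individual screen states.'">
    Interactive prototype</button>
  <button onclick="document.getElementById('fb').textContent =
    'Correct — a static blueprint lays out every state explicitly.'">
    Static blueprint</button>
  <p id="fb"></p>
</div>
```

Reviewers click an answer and read the exact feedback string — no imagination required.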
Why it matters. This tackles the interpretation gap head-on. Your storyboard says “drag and drop activity.” Every person who reads that pictures something different.
An interactive wireframe eliminates the guesswork: reviewers can do the activity, read the feedback, and experience the interaction logic.
Use Claude. Claude scored 94/100, producing a 16-screen wireframe where every interaction type is testable — drag-and-drop, quizzes, accordions, click-to-reveal, polls, and scenario branching — all with verbatim storyboard content. This was perhaps the single most impressive output in the entire evaluation.
ChatGPT (81/100) consolidated the module into only 6 screens and read more like a polished prototype than a design blueprint.
Watch for:
Claude can be over-granular — 16 screens with 16 nav buttons may need trimming.
ChatGPT’s version looks slicker but loses the screen-by-screen structure that makes a wireframe useful for developer handoff.
If using Gemini (58/100): 55% of your content will be missing and drag-and-drop interactions are barely functional — you’ll need to rebuild most of it.
If using Copilot (74/100): the wireframe aesthetic is decent but content is heavily abbreviated and feedback logic is weak; plan to manually restore your instructional copy.
Prompt (upload your storyboard document, then paste this into your LLM):
“You’re a CTO at an ed tech. Read the course storyboard doc attached. Then, create me the HTML and any other code I need to build an optimal, screen by screen interactive wireframe of module 1. Keep it clean and clear, Balsamiq style. Check your work and ensure 100% alignment with the document before you share the output.”
👇 Here’s what Gemini produced - looks sleek, but lacks the correct structure and content:
👇 Here’s what Claude produced - successfully lifted 94% of the storyboard content and structure from the source doc in v1:
3. Static Storyboard
What it does. A non-interactive, screen-by-screen blueprint of your module — every screen the learner would see, laid out vertically in a single scrollable document with wireframe-style sketching. Flip cards show front and back. Quiz questions show the stem, options, correct answer, and feedback text. Drag-and-drop zones show the phrases and categories. Nothing clicks, nothing animates — but every content element and interaction rule is visible and labelled.
Why it matters. Interactive wireframes are impressive, but they can actually slow down a review. Stakeholders start using the prototype instead of evaluating the design. A static wireframe strips away the distraction of interactivity and forces attention onto the things that matter at the review stage: is the content right? Is the screen sequence logical? Are the feedback messages clear? Is anything missing?
It’s also the format closest to what a developer actually needs. A developer doesn’t want to reverse-engineer quiz logic from a clickable prototype — they want to see every screen state, every feedback string, every correct answer laid out explicitly so they can build against it.
Use Claude. Claude scored 90/100 — producing a 12-screen wireframe in Balsamiq style with all four pillar card definitions (front and back), the full comparison table, all 10 drag-and-drop phrases, the quiz question with correct answers and feedback text, and both pass/fail result states on a single page. ChatGPT (85/100) took a different approach — more of a spec-style document with 17 screens and explicit developer annotations like “Wireframe: show chips + two bins” — less visual but arguably more implementation-friendly.
Watch for:
Claude’s output reads like a visual storyboard deck; ChatGPT’s reads more like a functional spec. Both are useful — pick whichever suits your review audience.
If using Copilot (68/100): expect heavy content abbreviation — the structure is there but you’ll need to manually restore most of your instructional copy from the storyboard.
If using Gemini (37/100): it generated a wireframe for an entirely different course. The storyboard was ignored completely. Do not use Gemini for this task.
Prompt (upload your storyboard document, then paste this into your LLM):
“You’re a CTO at an ed tech. Read the course storyboard doc attached. Then, create me the HTML and any other code I need to build an optimal, screen by screen basic (not interactive) wireframe of module 1. Keep it clean and clear, Balsamiq style. Check your work and ensure 100% alignment with the document before you share the output.”
👇 Gemini’s wireframe - looks good on first glance, but it hallucinated most of the content:
👇 Here’s what Claude produced - successfully lifted 90% of the content and structure from the source doc in v1:
The Quick Reference
Here’s a quick look-up of all of my test results:
The Storyboard: From Output to Source File
Nothing here replaces the storyboard. The storyboard is where your expertise lives — your understanding of how people learn, what sequence works, which interactions drive the right cognitive processes.
What AI is changing is what happens after you write it.
Until now, the storyboard was both the design document and the communication document — and it was asked to do both jobs with nothing but text. That’s where sign-off breaks down. Not because the thinking is wrong, but because a 40-page Word doc is the wrong medium for getting a room full of stakeholders to genuinely engage with a multi-dimensional, interactive experience.
The storyboard is now the source file — the single point of truth that feeds a suite of dynamic review assets. A journey map that makes pacing and structure visible at a glance. A wireframe that lets reviewers experience the interaction logic instead of imagining it.
A static blueprint that gives developers every screen state, every feedback string, every answer laid out explicitly. Each one generated in under a minute, each one solving a different communication problem.
This is what smarter sign-off looks like: not asking stakeholders to read harder, but giving them the right asset for the right conversation. When a reviewer can see the flow, click the interaction, and scan the screen sequence, their approval actually means something.
Try it for yourself and see what happens!
Happy innovating!
Phil 👋
PS: Want to learn how to become more efficient and effective with help from AI? Check out my AI & Learning Design Bootcamp.