Beyond Faster Content: How AI is Quietly Rewiring How L&D Works
Aka, a sneak peek of findings from my 2025 survey with Synthesia
Hey folks! 👋
Last month, I partnered with Synthesia to survey over 400 L&D professionals globally about how AI is reshaping their work (a BIG THANK YOU if you took part!). This is the second year we’ve run the research, and the shifts from 2024 to 2025 are pretty significant.
The team and I are still running deeper analysis and preparing the official findings for release next month — what follows is an exclusive sneak peek of what we’re seeing so far, based on The State of Instructional Design Survey (2025) that I ran with colleagues at Synthesia.

TLDR: in the last 12 months, AI has crossed a threshold in L&D. It has moved from experimental tool to everyday practice — and, for a growing minority, to something closer to operational infrastructure.
At the same time, the data also reveals some uncomfortable tensions: adoption is racing ahead of governance, and most teams are still using AI to go faster, not yet to work smarter.
Let’s dive in. 👇
Finding #1: AI Use Is Expanding Across the L&D Workflow
In line with other data points, our 2025 data shows AI use spreading across the entire learning lifecycle, not just the “build” stage.
Then (2024):
AI was primarily a content-creation accelerator — drafting scripts, generating quiz questions, creating basic video and visuals.
Now (2025):
AI is still used mostly for Design & Development tasks, but the way it’s used has deepened and expanded:
Design & Develop
Brainstorming topics and angles
Drafting objectives, outlines, scripts, case studies, scenarios, emails and comms
Producing AI video (avatars, synthetic presenters), voiceover and graphics
Rapid translation and localisation across multiple languages
Analyse
Summarising SME interviews and long documents
Clustering learner feedback and survey responses
Supporting early-stage needs analysis and portfolio prioritisation
Evaluate
Summarising usage and feedback data
Flagging low-performing or outdated content
Informing decisions about what to improve, consolidate or retire
Implement (early, but real)
Pilots of AI-powered Q&A assistants and chatbots
Rules-based personalisation and content routing
For most teams, the clearest ROI is still production efficiency: faster, cheaper, more scalable creation of assets, especially video and multilingual content. But the interesting shift is this:
We’re moving from “AI helps us build stuff” to “AI also helps us decide what’s worth building, improving, or retiring.”
And crucially, human-in-the-loop remains the default. Respondents consistently describe AI as a drafting partner and accelerator — with humans still owning learning science, contextual judgement, quality assurance, ethics and brand voice.
Finding #2: We’re Shifting from Tools to Stacks
Another big shift this year: AI in L&D is no longer about hunting for a single “magic tool.”
In 2024, most teams had a handful of point solutions — something like:
ChatGPT for drafting
One AI video tool (often Synthesia)
A few AI features inside authoring tools
In 2025, we’re seeing the emergence of composable AI stacks:
General-purpose AI (ChatGPT, Claude, Gemini, Microsoft Copilot, etc.) as the backbone
L&D-specific tools (Synthesia, Articulate AI, HeyGen, Descript, etc.) layered on top for specialised production
Embedded AI inside LMSs, LXPs, authoring tools and productivity suites
Internal / proprietary AI trained on organisational content (custom copilots, private LLMs, custom GPTs)
The pattern is clear: teams are becoming multi-tool and multi-model, and they expect AI capabilities to be embedded in the platforms they already use, not bolted on from the outside.
With that comes a new set of questions:
Which tools are approved for what?
Where does sensitive data go (and not go)?
How do we avoid every team building their own “shadow AI” stack?
So alongside the excitement, there’s rising governance pressure: as stacks grow, so does the need for architecture, standards and guidelines.
Finding #3: AI Use in L&D Is Maturing (But Not Evenly)
Using my six-stage AI maturity model, we see a noticeable shift up the curve compared to last year.
Very roughly (and simplifying a bit for this sneak peek):
Stage 1 – Asset Acceleration:
Individuals use AI to draft, translate and tidy things up faster.
Stage 2 – Workflow Integration:
AI is embedded in team workflows with standard prompts, templates and guardrails. It’s becoming “how we work.”
Stage 3–4 – Data-Informed & Intelligent Automation:
AI helps decide what to build, update or retire, and begins to orchestrate more complex workflows (e.g. localisation, agents, tutoring) with human oversight.
What we’re seeing in this AI-forward sample:
Fewer teams stuck in Stage 1 (ad hoc, individual use)
A strong centre of gravity now in Stage 2 (workflow integration)
A growing minority operating in Stages 3–4, where AI is shaping decisions, not just content
A small but real Stage 5 vanguard treating AI as part of their learning infrastructure, not just a set of tools
Practically, that means moving from:
“Maria uses ChatGPT to speed up her script writing”
to
“Our whole team uses shared AI playbooks, has agreed guardrails, and AI is plugged into how we design, build and review.”
Maturity is rising — but it’s far from uniform. Many teams are still early-stage; a small group is sprinting ahead.
Finding #4: L&D Has Moved Faster Than Some, but Slower Than Others
When we compare this picture to other functions in the organisation (based on respondents’ views plus qualitative comments), an interesting pattern emerges.
L&D is generally ahead of risk-averse functions like legal and compliance, which — for obvious reasons — are still mostly in “cautious individual exploration” mode.
But L&D is behind leading functions like product, marketing, customer support and engineering, where Stage 3–4 behaviours — running rapid experiments, orchestrating multi-step workflows and delivering personalised experiences at scale — are far more common.
In these more “AI mature” functions, AI is helping to decide what gets built or shipped, rather than just how fast we ship. More mature AI use looks like:
continuously testing and iterating based on data,
automating chunks of complex workflows (e.g. routing, tagging, triage),
tailoring experiences or recommendations to specific segments or individuals in real time.
By inhabiting this middle ground, we create both opportunity and risk:
Opportunity: L&D is well positioned to model responsible, human-centred AI in learning — where pedagogy, ethics and trust are non-negotiable.
Risk: If other functions move faster on experimentation, automation and impact, L&D could lose influence just as AI becomes central to how organisations build capability.
One of the big questions for 2026 will be:
Will L&D stop at Stage 2, or progress to experiment with — and embed — more ambitious and transformative AI use?
Finding #5: The Value Story Is Diversifying
AI is clearly creating value in L&D — but the type of value depends heavily on maturity.
For most teams, value still = efficiency
Right now, the dominant gains are:
Time savings on scripting, slides, quizzes, audio, video and localisation
More content, in more formats, in more languages without proportional headcount
Relief on bottlenecks (e.g. “beating the blank page”, first-pass SME stand-in, repetitive admin)
This is real and important value — but it’s largely L&D-facing: it makes our lives easier and our outputs faster.
For more “AI mature” teams, value = learner & business impact
In the more advanced cohort, we’re starting to see AI used to improve precision and impact:
Better targeting and prioritisation:
Using AI to understand needs, spot gaps and decide which programmes to update, consolidate or retire.
More meaningful personalisation:
Early examples of AI-driven pathways, role-specific recommendations and in-flow support.
Stronger links to outcomes (early but promising):
Using AI to help connect learning data to performance, engagement and business metrics.
The interesting part: the underlying toolset isn’t radically different between Stage 2 and Stage 3–4 teams. The stack looks similar. What changes is how intentionally those tools are aimed at outcomes, not just outputs.
Stage 1–2 question: “How can AI help us create this faster?”
Stage 3–4 question: “Should we even be creating this, how and for whom?”
Concluding Thoughts
From my analysis of the data so far I am seeing three big takeaways:
AI use is now normal, but its depth and impact vary.
Most teams are using AI somewhere in their workflow. The interesting spread is how they’re using it: from “faster PowerPoints” to early examples of AI as part of the learning operating system.
The biggest risk is shallow adoption.
The danger is not “we’re behind on AI” so much as “we’re using AI in ways that don’t materially change effectiveness, equity or impact.”
The frontier is moving from speed → strategy.
Over the next 12–24 months, I expect to see a sharper divide between:
teams who use AI primarily to go faster, and
teams who use AI to build smarter, more personalised, more evidence-based learning ecosystems.
This is just a sneak peek — the full report will go much deeper into:
the six-stage AI maturity model,
year-on-year shifts from 2024 → 2025,
detailed use cases across the learning lifecycle, and
practical roadmaps for moving from “faster production” to “measurable performance impact”.
In the meantime, I’m curious:
👉 Where would you put your team on that maturity curve right now?
👉 Are you mostly using AI to build faster — or also to decide what to build and for whom?
Add a comment to my related LinkedIn post here — I’d love to hear where you are on this journey.
Happy innovating!
Phil 👋
PS: Want to get hands-on with AI across all six stages of the maturity model, with help from me? Apply for a place on my bootcamp.