The Course Is Dying as the Unit of Learning
Here’s why, and what’s replacing it
Hey folks 👋
For about 50 years, the course has been the default unit of corporate learning — not because it worked, but because nothing else scaled.
The learning science hasn’t changed. What’s changed is the economics. The interventions that performance consultants have been advocating since the 1990s — worked examples, job aids, feedback on real work, coaching — are now buildable, maintainable, and possible to personalise at scale for the first time.
AI is finally forcing us to ask a harder question: if the course is only a tiny part of how people actually learn at work, why is it still the centre of our learning stack?
We Built an Industry Around the Margins
The story the data tells is consistent and uncomfortable.
Most learning time is not formal. The classic 70:20:10 work by Lombardo and Eichinger (1996) found that people attributed roughly 10% of their development to formal learning, and 90% to informal and experiential learning — work itself, stretch assignments, peers and mentoring. Across multiple datasets, formal learning is a small minority of how people actually develop capability (Docebo, 2025).
Formal training has a transfer problem. Decades of transfer-of-training research suggest that only around 10–20% of formal training leads to sustained behaviour change on the job (Baldwin & Ford, 1988; Henao-Calad, 2024). Even inside that small formal slice, most “learning” never makes it into real work.
Assessment ≠ behaviour. Compliance programmes routinely celebrate 90%+ test scores, but real-world adherence is dramatically lower. Internal studies at large tech firms, including Salesforce, have shown cases where employees who passed compliance assessments at 90%+ still followed only about one-third of the role-critical protocols in live environments (Salesforce, 2023). The course teaches people how to pass the course. It does not reliably change what they do.
Spend is huge, skills gaps remain. The Josh Bersin Company estimates that organisations spend over $400 billion a year on corporate training, yet 74% of senior leaders say their companies still lack the skills they need to compete (Bersin, 2026). Training spend has grown. Confidence in skills has not.
Put that together and the picture is damning:
formal learning accounts for only 4–10% of how people actually learn at work;
only a fraction of what we “learn” in courses transfers into changed behaviour.
Put another way, we’ve built an entire industry and tech stack around the margins.
These are converging signals, not controlled experiments. No single study proves the course is “dying.” But when the transfer data, the time data, the spend data, and the delegation data all point the same direction, the burden of proof shifts to those arguing for the status quo.
Why AI Finally Breaks the Spell
The course has survived this mismatch for decades because it solved real problems: scalability, auditability, and a visible response to “we need training on X.”
AI doesn’t change the underlying learning science. What it changes is the economics and visibility of alternatives. Three shifts in particular.
Delegation: AI proves how vulnerable many courses really are. In 2025, agentic AI systems started autonomously completing asynchronous courses — clicking through content, answering quiz questions correctly, posting plausible contributions in discussion forums, earning certificates and badges. All without a human learner anywhere in the loop (Hardman, 2025a).
We always suspected that large chunks of e-learning were a tick-box exercise. AI makes that suspicion visible. Any learning experience that can be delegated wholesale to an agent probably wasn’t a robust learning experience to begin with. It was a record-keeping ritual packaged as learning.
Diagnosis: AI makes it cheap to ask “is this even a learning problem?” Historically, “we need a course on X” was accepted at face value because diagnosis was slow and expensive. You needed interviews, surveys, workshops, and political capital to push back.
Mager and Pipe’s (1997) classic model gave us a language for this: when performance is poor, the root cause is often process, tools, clarity, or consequences — not knowledge or skill. But doing that analysis properly took time, so a course got built anyway.
AI compresses that diagnostic step. It can ingest performance data, workflow data, and behavioural data to highlight where gaps really live. It can surface patterns that scream “this is a process problem, not a capability issue.” And it can do this continuously, not once a year (Brandon Hall Group, 2025). When diagnosis becomes cheap and always-on, you stop reflexively building courses for things that were never learning problems in the first place.
Diversification: the solution space has exploded. For years, L&D has known that better interventions existed: worked examples, job aids, checklists, decision tools, feedback on real work, coaching conversations. The problem was scale. Creating, maintaining, and personalising these across dozens of roles and workflows simply didn’t pencil out.
AI changes that calculus. It is now realistically possible to generate and maintain worked example libraries tailored to specific contexts, embed AI helpers inside tools that give guidance at the moment of need, orchestrate spaced nudges that resurface key knowledge right before people need it, and give each manager or individual a micro-coach focused on a specific capability.
These units were always pedagogically stronger. AI makes them economical.
What the Bleeding Edge Looks Like in Practice
So what does “the new stack” actually look like when organisations lean into this? Here are four real patterns already in play.
Engineering: from engine courses to in-workflow AI coaching. Rolls-Royce piloted an internal GPT, fine-tuned on engine protocols, as a front-line assistant for engineers. Instead of sending people to more “away from desk” engine courses, they ran a controlled test: could an in-workflow AI coach cut routine formal training without hurting performance? They wrote down a clear hypothesis before they started — reduce formal training hours by 20% and increase managers’ visibility of skills by 20% — then measured what happened. The pilot delivered a 35% reduction in formal training hours, 12% faster issue resolution, and higher reported confidence, with managers getting better visibility of skill gaps as a side effect (Hardman, 2025b).
This is a critical distinction: the AI wasn’t being used to build a better course. The AI was the learning intervention — embedded in the workflow, delivering guidance at the point of need, and generating diagnostic data about where engineers actually struggled.

Product development: from courses to craft-specific agents. LinkedIn’s CPO Tomer Cohen has described a fundamental rethinking of how capability is built inside LinkedIn’s product organisation. Instead of sending builders to courses on compliance, experimentation, or analytics, they’ve built domain-specific AI agents — a trust agent, a growth agent, a research agent — each trained on carefully curated internal data (Hardman, 2025c). When a builder drafts a spec, the trust agent flags privacy risks and surfaces approaches that worked in similar past situations. When someone designs an experiment, the growth agent critiques the plan before it ships. The learning moment is inseparable from the work.
What’s telling is who owns these agents: the head of each craft is responsible for “their” agent — curating the corpus it learns from, tuning how and when it intervenes, and making sure builders understand how to work with it. This is learning design at a different level of abstraction: designing feedback loops and exemplar systems, not courses.
Compliance: from annual course to nudge systems. Case studies from providers like Skillcast, NAVEX and Disprz tell a consistent story: organisations are trading 60–90 minute annual compliance courses for nudge learning and 3–10 minute mobile micro-modules embedded into normal workflows (Skillcast, 2025; NAVEX, 2025; Disprz, 2025). In one global charity, the shift to weekly “Compliance Bite” scenarios and in-channel nudges maintained or improved audit outcomes while reducing time away from work (Skillcast, 2025). In heavily regulated UAE sectors, enterprises now meet audit requirements with microlearning and spaced reinforcement rather than day-long classrooms (Disprz, 2025). The compliance course isn’t gone — it’s been broken into smaller, more frequent, more embedded interventions that fit around the work rather than replacing it.

Enablement systems, not catalogues. In the UK Government’s 2026 DWP Copilot trial, 89% of 3,549 staff reached for self-directed exploration and peer support over the formal training that was right there waiting for them (Department for Work and Pensions, 2026). In Josh Bersin’s latest research, only about 5% of companies have built what he calls “dynamic enablement” — AI-first learning systems that surface help in the flow of work instead of sending people to a course library (Bersin, 2026). These organisations are 28× more likely to unlock employee potential, 6× more likely to exceed financial targets, and 7× more likely to achieve high productivity than peers still running static training models (Bersin, 2026). In practice, that means sales reps seeing guidance and examples in the CRM, service agents getting next-best actions in the ticketing tool, and managers seeing live enablement dashboards — not course completion rates.
So What Actually Replaces the Course?
The course doesn’t disappear. It just stops being the default unit of learning & development.

There are genuine cases where structured, sequenced instruction is still the right intervention — when learners have no existing schema for the domain, when complex interdependencies need to be understood before any part can be practised in isolation, or when the learning goal is identity change rather than task performance. Leadership transitions, new manager programmes, deep technical reskilling, culture and values work — these can still benefit from the kind of sustained, high-touch experience that a well-designed course provides.
But those cases are the minority, not the default. At the bleeding edge, the emerging stack looks something like this:
Worked examples in the workflow (~30%). Concrete, contextual examples surfaced at the point of need. Scenario libraries that show “what good looks like” for this product, this customer, this tool. Not examples in a course module — examples inside the template, the CRM, the ticketing system, the workflow where the work actually happens.
Feedback on real work (~30%). Human and AI feedback on live artefacts. Structured debriefs, coaching conversations and reflective prompts tied to actual outcomes — not simulated exercises. Managers getting AI feedback on performance reviews before they hit the employee. Sales reps getting critique on proposals. Knowledge workers getting assistance on strategy docs. The unit of learning shifts from content about feedback to feedback on real work.
Job aids and decision support (~30%). Checklists, prompts, calculators, templates and AI assistants integrated into tools. Systems that reduce cognitive load at the moment of performance. AI makes it trivial to generate these from existing policy documents, past incidents, or expert interviews — and to adapt them for different roles and contexts (a short sketch of this pattern follows this list).

Human-led development (~10%). High-touch, high-support interventions where they matter most: leadership, complex judgement, identity shifts, cohort-based learning for culture change. This is where courses, facilitated workshops, coaching programmes, and group learning experiences live — not as the default for every request, but as the intentional choice for problems that genuinely require them.
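To make the job-aid pattern concrete, here’s a minimal sketch, assuming an OpenAI-compatible enterprise endpoint is available; the model name, file name and prompt wording are all illustrative, and the same idea works inside a Copilot-style agent or a custom GPT.

```python
# Minimal sketch: turn an existing policy document into a role-specific,
# checklist-style job aid. Model name, file path and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes an OpenAI-compatible enterprise endpoint and API key

with open("escalation_policy.md", encoding="utf-8") as f:
    policy_text = f.read()

prompt = (
    "You are writing a one-page job aid for front-line team leaders.\n"
    "From the policy below, produce a numbered checklist of the decisions "
    "and actions a leader must take when escalating an incident. "
    "Keep each step to one sentence and flag any step that needs sign-off.\n\n"
    f"POLICY:\n{policy_text}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whatever model your enterprise GPT exposes
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)  # review with an SME before publishing
```

The value isn’t the script; it’s that the artefact lives next to the work and can be regenerated in minutes whenever the policy changes.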
How to Start the Transition (with tools you already have)
Right now, ~95% of L&D teams are using AI to build courses faster (Bersin, 2026). To join the ~5%, you don’t need a new platform — you need different experiments.
Here’s a concrete roadmap you can run with nothing more exotic than an enterprise GPT or a Copilot-style agent.
Step 1: Pick one high-friction workflow, not a topic. Not “we need training on feedback” — instead, “new support agents resolving Tier-2 incidents” or “sales managers writing performance reviews” or “front-line leaders applying the new escalation policy.” Define one or two business metrics you care about locally: time to resolution, error rate, rework volume, satisfaction scores.
Step 2: Design an A/B pilot. Group A gets your current best course or programme. Group B gets a minimal AI-enabled stack: a custom GPT trained on your internal docs, a set of worked examples and checklists embedded where the work happens, and a short nudge sequence over 4–6 weeks. Limit the pilot to one team or region. Run it for 4–8 weeks.
Step 3: Instrument both arms from day one. Compare not just satisfaction scores, but time to proficiency, real error rates, rework volume, and the volume and type of queries to the AI assistant. Copy Rolls-Royce’s approach: write down a clear hypothesis before you turn anything on (Hardman, 2025b). “We believe we can cut formal training time by X% with no loss of performance on Y metric.” That’s the bar.
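As a sketch of what that instrumentation can look like, assume you’ve exported one row of pilot metrics per participant to a CSV; the column names, the 20% bar and the significance test below are illustrative, not a prescribed method.

```python
# Minimal sketch: compare the course arm (A) and the AI-enabled arm (B)
# against a pre-registered hypothesis. Column names and thresholds are illustrative.
import pandas as pd
from scipy import stats

df = pd.read_csv("pilot_metrics.csv")  # one row per participant

a = df[df["arm"] == "A"]  # current best course
b = df[df["arm"] == "B"]  # in-flow stack: custom GPT + job aids + nudges

for metric in ["time_to_proficiency_days", "error_rate", "rework_hours"]:
    diff = b[metric].mean() - a[metric].mean()
    t, p = stats.ttest_ind(b[metric], a[metric], equal_var=False)
    print(f"{metric}: A={a[metric].mean():.2f}  B={b[metric].mean():.2f}  "
          f"diff={diff:+.2f}  p={p:.3f}")

# Pre-registered bar: >=20% less formal training time with no loss on the core metric.
time_saved = 1 - b["formal_training_hours"].mean() / a["formal_training_hours"].mean()
no_regression = b["error_rate"].mean() <= a["error_rate"].mean()
print(f"Formal training time reduced by {time_saved:.0%}; "
      f"hypothesis {'met' if time_saved >= 0.20 and no_regression else 'not met'}")
```

Whatever tooling you use, the point is that the hypothesis and the metrics are fixed before the pilot starts, so nobody can redefine success afterwards.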
Step 4: Use the AI to learn, not just to teach. Mine the conversation logs (anonymised) from your enterprise GPT or agent to see what people actually struggle with, which job aids or examples get used, and where the bot escalates to humans. Feed those patterns back into process, policy and product changes — not just more learning content. This is the feedback loop most teams miss: the AI isn’t just delivering support, it’s generating diagnostic data about where your workflows are actually broken.
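Here’s an equally minimal sketch of that mining step, assuming your platform can export anonymised logs as JSON lines with a query field and an escalation flag; the field names and themes are hypothetical, and you can hand the clustering to a topic model or the LLM itself once the simple version proves useful.

```python
# Minimal sketch: tally what people actually ask the assistant about, and how
# often it escalates to a human. Field names and themes are hypothetical.
import json
from collections import Counter

THEMES = {
    "escalation": ["escalate", "sev1", "incident"],
    "refunds": ["refund", "credit", "chargeback"],
    "policy_clarity": ["allowed", "policy", "am i supposed to"],
}

theme_counts, escalations, total = Counter(), 0, 0

with open("assistant_logs.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)  # one anonymised query per line
        text = record["query"].lower()
        total += 1
        escalations += record.get("escalated_to_human", False)
        for theme, keywords in THEMES.items():
            if any(k in text for k in keywords):
                theme_counts[theme] += 1

print(f"{total} queries, {escalations} escalated to humans")
for theme, n in theme_counts.most_common():
    print(f"{theme}: {n} ({n / total:.0%})")
```

Recurring themes with high escalation rates are your best candidates for process and policy fixes rather than more content.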
Step 5: Scale only what wins. If the AI-enabled condition beats or matches the course on real performance with less formal time, freeze new course development in that area and shift design effort into improving the in-flow stack. If it doesn’t, treat that as a signal that you’re in the 10–20% of cases where a genuine, high-quality course is still the right unit — and invest in making that course excellent.
A Note on Evaluation
The most common objection to this shift is: “but how do we prove it works without completion rates?” The answer is: stop measuring inputs and start measuring outputs. Completion rates, satisfaction scores, and quiz pass rates tell you what people clicked through. They don’t tell you what changed.
The Rolls-Royce pilot didn’t track whether engineers finished anything. They tracked outputs: issue resolution time, training hours saved, and manager visibility of actual skill gaps. If your current success metric is “92% completed the course,” your first pilot should ask instead: “Did rework rates drop?” or “Did time to first independent proposal shorten?”
The shift isn’t just what you build — it’s what you count. Swapping courses for job aids and microlearning but still measuring inputs is changing the format, not the logic.
Closing Thoughts
The data has been telling us for years that formal courses are a small, leaky part of the learning picture. AI doesn’t change that story. It removes our excuses.
It makes it easy to delegate weak course designs to agents, revealing how empty they always were. It makes it cheap to diagnose whether a course was ever the right answer. And it makes it economical to build the alternatives we always knew were pedagogically stronger.
Most teams today are using AI to spin the course flywheel faster. The more interesting move — the one Rolls-Royce ran, the one LinkedIn is building into its product organisation, the one Bersin’s top 5% are scaling, the one the DWP data already supports — is to step back and redesign the stack entirely. Starting from the 90% where learning actually happens, not the 10% where we’ve been building.
The course had a good run. What comes next is likely to be smaller, sharper, and closer to the real work.
Happy Innovating!
Phil 👋
PS: If you want to learn more about working with AI with me, you can apply for a place on my AI Bootcamp for L&D.