Beyond the Classroom & LMS: How AI Coaching is Transforming Corporate Learning
What a new HBR study tells us about the changing nature of workplace L&D
Hey folks 👋
There’s a vision that has been teased in Learning & Development for decades: closing the gap between learning and doing, moving beyond stopping work to take a course and instead bringing support directly into the workflow. This concept of “learning in the flow of work” has been imagined, explored and discussed for years, but never realised. Until now…?
This week, an article published in Harvard Business Review provided some compelling evidence that the long-awaited shift from “courses to coaches” might be not just possible, but powerful.
In a controlled experiment with 139 employees, researchers compared the impact of two types of learning experience on the development of a skill considered complex and hard to teach: problem framing.
The two settings were a) traditional in-classroom workshops, led by an expert facilitator, and b) AI coaching, delivered in the flow of work. The results were compelling…
For anyone in L&D, the numbers are more than just interesting: they provide evidence that it might now be not just possible but preferable, and more productive, to move corporate L&D out of the classroom, off the LMS and into the workflow.
In this week’s blog post, I unpack what the study shows, analyse why AI coaching outperforms classroom-based and online learning, and explore what this might mean for how we design, deliver and consume “learning” at work.
Let’s go!
The Experiment & Findings
In the study, the BCG Henderson Institute compared the impact of a virtual classroom (control) with a one-to-one gen-AI coach (test). Each was tasked with teaching a single, complex skill — problem framing — chosen in part because it’s considered challenging to teach.
The 139 participants were drawn from BCG RISE, a re-skilling programme for mid-career professionals. Pre-test results showed that participants had varied levels of understanding, experience and competence in problem framing, which allowed the researchers to assess the effect of the two approaches on specific types of learners (e.g. novices vs. more experienced learners). The effect size was measured with pre- and post-lesson problem-framing competency tests, augmented with self-reported engagement levels and deep-dive interviews with participants.
It’s not 100% clear what the specifics of the experiment looked like to the participants. However, a plausible approximation of the setup based on the results and on patterns that I’m seeing in experiments on the ground, would be this:
Classroom Condition
An expert in problem framing introduces the concepts to a mixed group, and leads a session of exploration, practice and feedback on problem framing.
AI Condition
A 1:1 chat-based coach guides short attempt → feedback → revision loops based on real work artefacts (briefs, emails, slides) in the flow of work. In the process, the AI coach surfaces clarifying questions, offers alternative framings to choose from, and sends brief, timed messages to prompt retrieval and reflection at optimal intervals after specific interactions.
After testing with 139 employees, three key findings emerged:
An AI coach was able to teach complex skills to the same level as the expert instructor, but 23% faster overall.
Learners who started with the lowest scores (i.e. those with the most to learn) saw 32% larger gains with the AI coach compared to peers who learned in the classroom.
After just one interaction, 53% of learners rated the AI coach higher than the human instructor, for three reasons:
its ability to offer “judgement-free practice”
better “learning-job fit”
more tailored and personalised feedback.
TLDR: The evidence suggests that “learning in the flow of work” is not only feasible thanks to gen AI; it also shows potential to be more scalable, more equitable and more efficient than traditional classroom/LMS-centred models.
Why the Coaching Model Works for the Business and Learner
The benefits of a shift from “stop and learn” human-led courses → “always on” AI-led coaching are obvious for the organisation: it’s cheaper and more scalable to design and deliver, and it avoids the biggest cost of workplace L&D, the loss of earnings when employees stop working to start learning.
But, what’s in it for the learner / employee?
As the results from this experiment show, the short answer is that, if executed well, the shift from a course → coach model can also benefit them. There’s no magic here: what we see in well-executed AI coaching models is essentially the scaling of a long-tested and proven apprenticeship model of learning and development, one that is much better aligned with what we know about how humans learn than “sit and listen/watch” models.
Here’s my hot take on the potential benefits of AI coaching, mapped to the science of learning:
1. Reward & Purpose
As humans, we learn best what’s useful right now. An embedded coach ties learning to today’s task list, real artefacts and KPIs while also preserving autonomy. In this model, the motivation to develop becomes intrinsic (of value to the learner) rather than extrinsic (do this training, or else…), which in turn creates the optimal conditions for learning.
Concretely, a well-designed AI coach can:
Make purpose explicit at the moment of need: by translating a task into a one-sentence outcome + metric (e.g., “Define the problem; success = stakeholder sign-off in Friday’s meeting”).
Bind learning to real stakes: by showing this skill → this KPI (e.g., “Sharper problem framing correlates with −20% rework on your team’s projects”), it explicitly connects learning and work.
Personalise the “why”: by using your role, backlog, and goals (e.g., “For you as a PM, clearer constraints reduce cycle time, so let’s prioritise that”), it can both state and optimise the value of your effort.
2. Learning in Flow
Research shows that peak learning happens when we are in a state of flow, which is induced by three conditions: clear goals + immediate feedback + “right-sized” challenge (not too hard, not too easy). An AI coach can create these conditions by:
Clarifying the goal on demand (e.g., “State the problem in one sentence + success criteria”), then anchoring all feedback to it.
Sensing difficulty from a learner’s outputs, performance and responses (hesitations, vague claims, repeated errors) and nudging the level up or down accordingly.
Giving immediate, informational feedback (“What’s strong / What’s unclear / Try this next”) on the exact line, slide, or step the learner is on.
Scaffolding, then fading: offering hints → cues → exemplars early, then removing them as the learner improves, so effort stays productive.
Forcing productive choices by proposing two viable alternatives and asking the learner to justify one, keeping them at the edge of competence.
Pacing the reps of retrieval and application at optimal moments, so learning gains consolidate without overwhelm.
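To make the “right-sized challenge” idea concrete, here’s a toy sketch of how a coach might nudge difficulty up or down from recent success signals. The 1–5 scale and the thresholds are purely illustrative assumptions on my part, not anything described in the study; a real coach would infer difficulty from the learner’s actual outputs.

```python
# Illustrative sketch only: the 1-5 difficulty scale and the 0.85/0.5
# thresholds are assumptions, not part of the BCG/HBR experiment.
def adjust_difficulty(level: int, recent_success_rate: float) -> int:
    """Keep the learner at the edge of competence on a 1-5 difficulty scale."""
    if recent_success_rate > 0.85:      # too easy: raise the bar
        return min(level + 1, 5)
    if recent_success_rate < 0.5:       # too hard: add scaffolding back in
        return max(level - 1, 1)
    return level                        # productive struggle: hold steady
```

The point is not the specific numbers but the loop: sense, adjust, hold the learner in the zone where effort stays productive.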

3. Spacing & Retrieval
Research shows very clearly that massed, one-off learning decays and is lost quickly. Research also shows that spaced retrieval—i.e. designing for learning over time—locks memories and procedures into durable knowledge and skills.
An AI coach can help operationalise this by:
Scheduling micro-retrievals: coaches can be trained to design and deliver sessions 1–2 minutes long at successive intervals (e.g., same day → +2 days → +7 days), tied to the calendar events that matter to the learner.
Varied retrieval: coaches can be trained to require recall (e.g. through short explain-it-back activities) to strengthen multiple retrieval routes and turn basic remembering and recognition into substantive learning.
Linking practice to real-world tasks: coaches can make reviews purposeful by connecting retrieval and practice to real tasks, e.g. “Before tomorrow’s briefing, restate the constraints you captured last time”.
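To show how mechanical the scheduling part is, here’s a minimal sketch of a spaced-retrieval scheduler. The intervals (same day → +2 days → +7 days) come from the example above; the function name and everything else is an illustrative assumption, not a real product API.

```python
from datetime import date, timedelta

# Intervals from the example above (same day, +2 days, +7 days);
# the rest of this sketch is an illustrative assumption.
INTERVALS_DAYS = [0, 2, 7]

def schedule_micro_retrievals(first_practice: date, intervals=INTERVALS_DAYS):
    """Return the dates on which a coach should resurface a 1-2 minute
    retrieval prompt for a skill first practised on `first_practice`."""
    return [first_practice + timedelta(days=d) for d in intervals]

# A skill practised on 3 March gets reviews the same day,
# two days later and a week later.
reviews = schedule_micro_retrievals(date(2025, 3, 3))
```

In a real system the interesting work is everywhere else: anchoring each review to a calendar event that matters to the learner, and varying the retrieval format each time.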
TLDR: the always on coach is an effective model for human learning and development because a) it makes learning relevant and timely and b) it injects the right kind of friction—retrieval, feedback, calibrated challenge—exactly when it helps.
AI Coaching in Practice
So what does all of this mean in practice, both for the employee / learner and the L&D team? Here are some initial thoughts, based on this research and on what I am seeing on the ground in the workplace.
For Employees / Learners
AI coaching feels less like “training” and more like an intelligent assistant helping you do your job better. As the BCG research above suggests, the response from learners and employees has so far been positive.
Here’s how I’ve seen it working in practice:
Most learning happens in workflow tools, not in a classroom or on an LMS. The coach lives in Slack/Teams, Jira and your CRM, surfacing help where you already work, in the flow of work.
Prompts are timely and relevant. For example, when you start a new brief it might ask: “Looks like you’re framing a new problem. Want to use the 5 Ws template to ensure clarity?”
Practice is on-demand and safe, learner-led and org-required. Before a client call, you might request: “Run a 5-minute simulation on handling budget objections.” You might also receive a “top down” practice request from the org to help drive progress towards a priority target. In both cases, you can learn on the job with support and practice, fail and retry with zero judgement.
Feedback is instant and targeted. Paste a draft email or PPT deck and ask your AI coach to help you optimise it for a specific client or call.
Reinforcement is automatic. A day or two later, a brief, timed scenario resurfaces in chat to lock in the skill.
For Instructional Designers / L&D Team Members
For the people who design and deliver training in the workplace, the centre of gravity shifts from building courses and content (events) to building performance architecture (ecosystems).
In practice, this might mean:
Design triggers, not content. The role of the instructional designer is to define the workflow moments where AI should (and should not) intervene for specific roles, mapped to specific goals (e.g., “When a ticket is ‘escalated’, prompt three de-escalation moves.”).
Create micro-content & simulations. L&D teams will be responsible for building the blocks that AI uses: retrieval banks, role-play parameters, feedback rubrics. They will also likely be responsible for storing these as versioned, modular components so they can be A/B tested and reassembled quickly to optimise the ecosystem.
Be an analyst & experimenter. Track usage and outcome deltas; A/B interventions; scale what moves KPIs.
Define quality standards for “optimal performance.” L&D will define the AI’s “target state” through the creation of skill rubrics, acceptance criteria, and definitions of done for core tasks within key roles. They will also convert these standards into machine-readable checks the AI can reference when giving feedback, so guidance is both consistent and auditable.
Orchestrate the human connection. Use AI data to target the 10% of in-person workshop time at the real sticking points.
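To make “design triggers, not content” tangible, here’s a toy sketch of what a designer-owned trigger map might look like: workflow events on one side, coaching interventions on the other. The event names, roles and fields are all hypothetical, invented for illustration; they’re not a real product schema.

```python
# Hypothetical trigger map: event names, roles and fields are invented
# for illustration, not a real coaching product's API.
TRIGGERS = {
    "ticket_escalated": {
        "roles": ["support_agent"],
        "goal": "de-escalation",
        "intervention": "Prompt three de-escalation moves before replying.",
    },
    "new_brief_started": {
        "roles": ["pm", "consultant"],
        "goal": "problem_framing",
        "intervention": "Offer the 5 Ws template to frame the problem.",
    },
}

def intervention_for(event: str, role: str):
    """Return the coaching nudge for this event/role, or None to stay silent."""
    rule = TRIGGERS.get(event)
    if rule and role in rule["roles"]:
        return rule["intervention"]
    return None
```

Notice that the designer’s craft shows up as much in the `None` branch as in the nudges: deciding where the AI should *not* intervene is part of the job.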
TLDR: For L&D, the job shifts from building programmes to building AI products. The day-to-day work shifts from analysing needs and building content to defining workflow triggers, setting clear quality standards for “optimal AI performance” and monitoring data to track and fine-tune the system for optimal impact on KPIs.
Conclusion: toward a 90:10 operating model of workplace L&D?
So what does all of this mean in practice? Right now, the vast majority of workplaces remain tethered to legacy technologies and long-standing operating procedures that continue to centre the course and leave little room for the coach. But there are signs that change might be on its way.
On the ground, across the Fortune 500s I’m working with, there’s genuine appetite—from both business and employees—to test new models of L&D, including AI-powered coaching. Powered by the energy and potential of AI, L&D leaders are imagining a new 90:10 model, where 90% of training is delivered via AI coaching, and 10% via in-person, high-touch contact time.
The visions I am seeing emerge among L&D leaders on the ground are overwhelmingly “AI-first” and designed to make learning invisible — i.e. to re-couple learning and work and re-conceive development as a process of always-on performance support in the flow of work, rather than stop-and-listen events.
As the HBR/BCG study shows, this could be a good thing if we build L&D systems that serve both sides of the equation: measurable business impact and meaningful learner benefit, complete with autonomy, psychological safety and meaningful skill growth.
If you’re exploring this shift, my advice is this: design for the mechanisms that actually make learning stick, set strict quality standards, start small and keep trust front and centre (e.g. via opt-in systems and investing heavily in the humans-in-the-loop). That’s how “learning in the flow of work” moves from being a fast track to hitting efficiency KPIs to actually enabling sustained and improved performance, both for the business and the learner.
Happy experimenting!
Phil 👋
PS: Want to explore the impact of AI on your day to day work with me and a group of fellow L&D folks / Instructional Designers? Apply for a place on my AI & Learning Design Bootcamp.
PPS: Join the conversation about this post on LinkedIn here.


