Beyond the Classroom & LMS: How AI Coaching is Transforming Corporate Learning
What a new HBR study tells us about the changing nature of workplace L&D
Hey folks 👋
There's a vision that has teased Learning & Development for decades: a vision of closing the gap between learning and doing, of moving beyond stopping work to take a course and instead bringing support directly into the workflow. This concept of "learning in the flow of work" has been imagined, explored and discussed for decades, but never realised. Until now…?
This week, an article published in Harvard Business Review provided some compelling evidence that the long-awaited shift from "courses to coaches" might not just be possible, but also powerful.
In a controlled experiment with 139 employees, researchers compared two types of learning experience on skills development, specifically a skill considered to be complex and hard to teach: problem framing.
The two settings were a) traditional in-classroom workshops, led by an expert facilitator, and b) AI coaching, delivered in the flow of work. The results were compelling…
For anyone in L&D, the results are more than just interesting: they provide evidence that it might now be not just possible but also preferable, and more productive, to move corporate L&D out of the classroom, off the LMS and into the workflow.
In this week's blog post, I unpack what the study shows, analyse why AI coaching performs better than classroom-based and online learning, and explore what this might mean for how we design, deliver and consume "learning" at work.
Let's go!
The Experiment & Findings
In the study, the BCG Henderson Institute compared the impact of a virtual classroom (control) with a one-to-one gen-AI coach (test). Each was tasked with teaching a single, complex skill, problem framing, which was chosen in part because it's considered to be challenging to teach.
The 139 participants who took part in the experiment were drawn from BCG RISE, a re-skilling programme for mid-career professionals. Pre-test results showed that the participants had varied levels of understanding, experience and competence in problem framing, which allowed researchers to assess the effect of the two approaches for specific types of learners (e.g. novices vs. more experienced). The effect size was measured by running pre- and post-lesson problem-framing competency tests, which were augmented with self-reported engagement levels and deep-dive interviews with participants.
It's not 100% clear what the specifics of the experiment looked like to the participants. However, a plausible approximation of the setup, based on the results and on patterns that I'm seeing in experiments on the ground, would be this:
Classroom Condition
An expert in problem framing introduces the concepts to a mixed group, then leads a session of exploration, practice and feedback.
AI Condition
A 1:1 chat-based coach guides short attempt → feedback → revision loops based on real work artefacts (briefs, emails, slides) in the flow of work. In the process, the AI coach surfaces clarifying questions, offers alternative framings to choose from, and sends brief, timed messages to prompt retrieval and reflection at optimal intervals after specific interactions.
After testing with 139 employees, three key findings emerged:
An AI coach was able to teach complex skills to the same level as the expert instructor, but 23% faster overall.
Learners who started with the lowest scores (i.e. those who had the most to learn) saw 32% larger gains with an AI coach when compared to peers who learned in the classroom.
After just one interaction, 53% of learners rated the AI coach higher than the human instructor, for three reasons:
its ability to offer "judgement-free practice"
better "learning-job fit"
more tailored and personalised feedback.
TLDR: The evidence suggests that "learning in the flow of work" is not only feasible as a result of gen AI; it also shows potential to be more scalable, more equitable and more efficient than traditional classroom/LMS-centred models.
Why the Coaching Model Works for the Business and Learner
The benefits of a shift from a model of "stop and learn" human-led courses to "always on" AI-led coaching are obvious for the organisation: it's cheaper and more scalable to design and deliver, and it avoids what is the biggest cost of workplace L&D, the loss of earnings when employees stop working to start learning.
But, what's in it for the learner / employee?
As the results from this experiment show, the short answer is that, if executed well, the shift from a course to a coach model can also have positive benefits for them. There's no magic here: what we see in well-executed AI coaching models is essentially the scaling of a long-tested and proven apprenticeship model of learning and development, one that is much better aligned with what we know about how humans learn than "sit and listen/watch" models.
Here's my hot take on the potential benefits of AI coaching, mapped to the science of learning:
1. Reward & Purpose
As humans, we learn best what's useful right now. An embedded coach ties learning to today's task list, real artefacts and KPIs while also preserving autonomy. In this model, the motivation to develop becomes intrinsic (of value to the learner) rather than extrinsic (do this training, or else…), which in turn creates the optimal conditions for learning.
Concretely, a well-designed AI coach can:
Make purpose explicit at the moment of need: by translating a task into a one-sentence outcome + metric (e.g., "Define the problem; success = stakeholder sign-off in Friday's meeting").
Bind learning to real stakes: by showing this skill → this KPI (e.g., "Sharper problem framing correlates with ↓20% rework on your team's projects"), it explicitly connects learning and work.
Personalise the "why": by using your role, backlog, and goals (e.g., "For you as a PM, clearer constraints reduce cycle time, so let's prioritise that"), it can both state and optimise the value of your effort.
2. Learning in Flow
Research shows that peak learning happens when we are in a state of flow, which is induced by three conditions: clear goals + immediate feedback + "right-sized" challenge (not too hard, not too easy). An AI coach can create these conditions by:
Clarifying the goal on demand (e.g., "State the problem in one sentence + success criteria"), then anchoring all feedback to it.
Sensing difficulty from a learner's outputs, performance and responses (hesitations, vague claims, repeated errors) and nudging the level up or down accordingly.
Giving immediate, informational feedback ("What's strong / What's unclear / Try this next") on the exact line, slide, or step the learner is on.
Scaffolding, then fading: offering hints → cues → exemplars early, then removing them as the learner improves, so effort stays productive.
Forcing productive choices by proposing two viable alternatives and asking the learner to justify one, keeping them at the edge of competence.
Pacing the reps of retrieval and application at optimal moments, so learning gains consolidate without overwhelm.
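As a toy illustration of the "nudge the level up or down" idea above (the thresholds, signals and function names are my own inventions, not from the study or any real tool), a crude challenge calibrator could look like this:

```python
# Toy sketch: adjust a learner's challenge level from simple observable
# signals, keeping practice "right-sized" (not too hard, not too easy).
# All thresholds are illustrative assumptions.

def adjust_challenge(level: int, errors: int, hesitations: int) -> int:
    """Return a new challenge level on a 1-5 scale.
    Many errors or hesitations -> step down; a clean run -> step up."""
    if errors >= 3 or hesitations >= 5:
        return max(1, level - 1)   # struggling: ease off
    if errors == 0 and hesitations <= 1:
        return min(5, level + 1)   # cruising: raise the bar
    return level                   # productive struggle: hold steady

print(adjust_challenge(3, errors=0, hesitations=0))  # 4: clean run, step up
print(adjust_challenge(3, errors=4, hesitations=2))  # 2: struggling, step down
```

A real coach would infer these signals from chat behaviour and work artefacts rather than receive them as neat integers, but the control loop is the same.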

3. Spacing & Retrieval
Research shows very clearly that massed, one-off learning decays quickly. Research also shows that spaced retrieval (i.e. designing for learning over time) locks memories and procedures into durable knowledge and skills.
An AI coach can help operationalise this by:
Scheduling micro-retrievals: coaches can be trained to design and deliver sessions of 1–2 minutes at successive intervals (e.g., same day → +2 days → +7 days), tied to the calendar events that matter to the learner.
Varied retrieval: coaches can be trained to require recall (e.g. through short explain-it-back activities) to strengthen multiple retrieval routes and turn basic remembering and recognition into substantive learning.
Linking practice to real-world tasks: coaches can make reviews purposeful by connecting retrieval and practice to real tasks, e.g. "Before tomorrow's briefing, restate the constraints you captured last time".
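The scheduling part of this is mechanically simple. As a minimal sketch (the offsets mirror the example cadence above but are otherwise my own assumption, not a prescription from the study):

```python
from datetime import date, timedelta

# Hypothetical review offsets in days, mirroring the "same day -> +2 days
# -> +7 days" cadence mentioned above. A real coach would tune these per
# learner and per skill.
REVIEW_OFFSETS = [0, 2, 7]

def schedule_micro_retrievals(first_practice: date, offsets=REVIEW_OFFSETS) -> list[date]:
    """Return the dates on which a 1-2 minute retrieval prompt should fire."""
    return [first_practice + timedelta(days=d) for d in offsets]

# A skill first practised on 3 March 2025 is reviewed that day,
# then on 5 March and 10 March.
print(schedule_micro_retrievals(date(2025, 3, 3)))
```

In practice the interesting design work is not the arithmetic but deciding which calendar events (a briefing, a client call) each retrieval should anchor to.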
TLDR: the always-on coach is an effective model for human learning and development because a) it makes learning relevant and timely and b) it injects the right kind of friction (retrieval, feedback, calibrated challenge) exactly when it helps.
AI Coaching in Practice
So what does all of this mean in practice, both for the employee / learner and the L&D team? Here are some initial thoughts, based on this research and on what I am seeing on the ground in the workplace.
For Employees / Learners
AI coaching feels less like "training" and more like an intelligent assistant helping you do your job better. As the BCG research above suggests, the response from learners and employees has so far been positive.
Here's how I've seen it working in practice:
Most learning happens in workflow tools, not in a classroom or on an LMS. The tutor lives in Slack/Teams, Jira and your CRM, surfacing help where you already work, in the flow of work.
Prompts are timely and relevant. For example, when you start a new brief it might ask: "Looks like you're framing a new problem. Want to use the 5 Ws template to ensure clarity?"
Practice is on-demand and safe, learner-led and org-required. Before a client call, you might request: "Run a 5-minute simulation on handling budget objections." You might also receive a "top down" practice request from the org to help drive progress towards a priority target. In both cases, you can learn on the job with support and practice, fail and retry with zero judgement.
Feedback is instant and targeted. Paste a draft email or PPT deck and ask your AI coach to help you optimise it for a specific client or call.
Reinforcement is automatic. A day or two later, a brief, timed scenario resurfaces in chat to lock in the skill.
For Instructional Designers / L&D Team Members
For the people who design and deliver training in the workplace, the centre of gravity shifts from building courses and content (events) to building performance architecture (ecosystems).
In practice, this might mean:
Design triggers, not content. The role of the instructional designer is to define the workflow moments where AI should (and should not) intervene for specific roles, mapped to specific goals (e.g., "When a ticket is 'escalated', prompt three de-escalation moves.").
Create micro-content & simulations. L&D teams will be responsible for building the blocks that AI uses: retrieval banks, role-play parameters, feedback rubrics. They will also likely be responsible for storing these as versioned, modular components so they can be A/B tested and reassembled quickly to optimise the ecosystem.
Be an analyst & experimenter. Track usage and outcome deltas; A/B interventions; scale what moves KPIs.
Define quality standards for "optimal performance". L&D will define AI's "target state" through the creation of skill rubrics, acceptance criteria, and definitions of done for core tasks within key roles. They will also convert these standards into machine-readable checks the AI can reference when giving feedback, so guidance is both consistent and auditable.
Orchestrate the human connection. Use AI data to target the 10% of human-led workshop time at the real sticking points.
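To make the "triggers plus machine-readable checks" idea concrete, here's a hypothetical sketch. Every name, field and threshold below is illustrative, invented for this post rather than drawn from the study or any real platform:

```python
# Hypothetical sketch: a workflow trigger paired with a machine-readable
# quality check an AI coach could reference when giving feedback.

ESCALATION_TRIGGER = {
    "event": "ticket_status_changed",        # workflow moment the coach listens for
    "condition": {"new_status": "escalated"},
    "role": "support_agent",
    "intervention": "Prompt three de-escalation moves before replying.",
}

def problem_framing_check(statement: str) -> list[str]:
    """A crude 'definition of done' check for a one-sentence problem frame:
    return a list of issues to feed back to the learner (empty = passes)."""
    issues = []
    if len(statement.split()) > 40:
        issues.append("Frame the problem in one sentence (under ~40 words).")
    if "success" not in statement.lower():
        issues.append("State an explicit success criterion.")
    return issues

print(problem_framing_check("Sales are down."))  # flags the missing success criterion
```

The point of expressing standards this way is auditability: the same check runs identically for every learner, and L&D can version and A/B test it like any other component.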
TLDR: For L&D, the job shifts from building programmes to building AI products. The day-to-day work shifts from analysing needs and building content to defining workflow triggers, setting clear quality standards for "optimal AI performance" and monitoring data to track and fine-tune the system for optimal impact on KPIs.
Conclusion: toward a 90:10 operating model of workplace L&D?
So what does all of this mean in practice? Right now, the vast majority of workplaces remain tethered to legacy technologies and long-standing operating procedures that continue to centre the course and leave little room for the coach. But there are signs that change might be on its way.
On the ground, across the Fortune 500s I'm working with, there's genuine appetite, from both business and employees, to test new models of L&D, including AI-powered coaching. Powered by the energy and potential of AI, L&D leaders are imagining a new 90:10 model, where 90% of training is delivered via AI coaching and 10% via in-person, high-touch contact time.
The visions I am seeing emerge among L&D leaders on the ground are overwhelmingly "AI-first" and designed to make learning invisible, i.e. to re-couple learning and work and re-conceive development as a process of always-on performance support in the flow of work, rather than stop-and-listen events.
As the HBR/BCG study shows, this could be a good thing if we build L&D systems that serve both sides of the equation: measurable business impact and meaningful learner benefit, complete with autonomy, psychological safety and meaningful skill growth.
If you're exploring this shift, my advice is this: design for the mechanisms that actually make learning stick, set strict quality standards, start small and keep trust front and centre (e.g. via opt-in systems and heavy investment in humans-in-the-loop). That's how "learning in the flow of work" moves from being a fast track to hitting efficiency KPIs to actually enabling sustained and improved performance, both for the business and the learner.
Happy experimenting!
Phil 👋
PS: Want to explore the impact of AI on your day-to-day work with me and a group of fellow L&D folks / Instructional Designers? Apply for a place on my AI & Learning Design Bootcamp.
PPS: Join the conversation about this post on LinkedIn here.


