Hey folks! 👋
This week, I had the privilege of addressing a group of CLOs and their teams at a well-known Fortune 500 company. My brief was to help them explore the impact of AI from 2023 to the present day, with the goal of developing a shared understanding of how to make the best use of AI in L&D this year and beyond.
I decided to start the session with what I consider to be a mind-blowing fact: in 2023, just 5% of L&D professionals were using AI. By 2024, that figure had jumped to 84%, with 57% of L&D professionals using AI in their work daily.
On the surface, this looks like an unprecedented and deeply disruptive shift in the way our industry operates. But… as I shared with the group, this apparently dramatic shift tells only part of the story - and masks what I've come to call “The Great Illusion” of AI's impact in L&D.
In this week's blog, I'll share what we explored and offer some insights on how we might progress from adoption of AI to impact with AI in L&D in 2025.
Let’s go!
The Stealth Years: 2023's Secret Cyborgs
2023 was marked by what I call "grassroots adoption by stealth." While large corporations were penning complex policies banning AI use over fears of compromising IP, something fascinating was happening on the ground: L&D professionals were quietly becoming "secret cyborgs."
According to Fishbowl's early 2023 study, a whopping 68% of knowledge workers who used AI didn't tell their bosses about it. In my own research with L&D teams, I found similar patterns - around 70% were using tools like ChatGPT, Claude, and Co-Pilot, but keeping it under wraps.
What were they using it for? Initially, mostly functional tasks:
Writing emails
Summarising content
Creating module titles
Basic creative tasks like generating activity ideas
But by late 2023, as my own and other research confirmed, L&D users gained confidence and began using generic AI tools to tackle more specialised tasks like:
Needs analyses
Writing learning objectives
Designing course outlines
Creating instructional scripts
Strategic planning
The Adoption <> Impact Paradox
While many celebrate the rise of AI in L&D from late 2023 to 2024 as a sign of the L&D industry's willingness to innovate, in my view the shift toward using AI for more specialised L&D tasks revealed a dangerous pattern.
According to the Jagged Frontier research published by Harvard Business School in 2023, while AI tools like ChatGPT improved performance on functional tasks requiring little domain knowledge (like content summarisation and email writing), they actually decreased performance quality by 19% for more complex, domain-specific tasks that were poorly represented in AI's training data.
This created what I call "the illusion of impact" - a situation where L&D professionals speed up their workflows and feel more confident about their AI-assisted work, but in practice produce lower quality outputs than they would if they didn’t use AI.
The Jagged Frontier research showed that without necessary prompt engineering expertise and the time, skills, and knowledge to validate the quality of outputs, humans + AI were significantly less likely to provide correct or optimal solutions for specialised tasks compared to non-AI users.
So, at the end of 2023 we had a growing army of L&D professionals who, without guidance or support, were using AI for deeply specialised tasks like needs analysis, instructional design decision-making, and goal definition - tasks that generic AI tools were not optimised for. As an industry, we operated under the “AI illusion” that AI was making us better at our jobs where, in reality, the opposite was likely true.
2024: The Year AI Came Out at Work
2024 marked a significant shift - what I call "the year AI came out at work." In 2024, more and more organisations formally adopted and implemented AI in the workplace, with a 6X increase in centralised investment in AI compared with 2023.
Three key developments converged to make this happen:
Organisations increasingly recognised that covert AI usage was untenable and risky
Research like the Jagged Frontier project provided compelling evidence of AI's potential impact on knowledge workers’ productivity and quality
Enterprise-grade AI tools emerged, addressing earlier concerns about data security and IP.
We often assume that adoption = impact - that 2024, the year of widespread formal adoption, was also the year AI began to improve both the efficiency and effectiveness of L&D in a concerted and strategic way. But here's where it gets interesting: despite widespread centralised implementation and adoption of AI in L&D, impact remained limited.
A survey that I ran with Synthesia at the end of 2024 showed that, despite the adoption of AI tools, L&D teams were still grappling with the same long-term “wicked” problems that have held us back for decades. At the end of 2024, teams were still producing roughly the same number of projects annually as they had for decades (~12), and 38% of L&D teams were still turning down work due to severe capacity constraints.
Why was this? Why did AI adoption not lead to impact? From my research, two factors seem to have been at play here:
Permission Without Direction
While organisations granted permission to use AI tools, they provided little strategic direction on how to leverage them effectively. Creating prompt libraries helped bridge some gaps, but L&D professionals on the ground often complained about a lack of training and support to enable them to use AI tools effectively. Most AI usage in L&D remained fragmentary and individual-led rather than strategically implemented at an organisational level. The nascent nature of AI technology - with far more unknowns than knowns about its potential - made formulating concrete L&D strategies and providing clear direction challenging.
The Generic Tools Problem
L&D is a highly specialised function requiring specific domain knowledge and skills. Generic AI tools, while powerful, were not optimised for specialised L&D tasks like needs analyses, goal definition, and instructional design decision-making. Without specialised tools built for L&D workflows, teams struggled to achieve impact with any consistency or depth.
Breaking the Illusion: Strategic Pioneers Show the Way
While most organisations focused on implementing enterprise-grade AI tools for general use in 2024, a small number stand out for taking a more strategic, hypothesis-driven “R&D” style approach.
Instead of just permitting open access to certain AI tools, these organisations first generated hypotheses related to specific goals and opportunities, and then tested the impact of AI against specific strategic goals. In doing so, they avoided the trap of simply adopting AI and hoping for the best. They instead approached AI implementation with clear business goals, specific hypotheses about how AI could help achieve those goals, and rigorous testing frameworks to measure both efficiency and effectiveness gains.
Here are three examples:
Leyton: Optimising Sales Coaching
Leyton needed to improve sales performance through better coaching
Hypothesis: By analysing client calls using Refract AI, we can assess the rate at which coaching initiatives are impacting sales employees' performance with 90% accuracy
Test: Conducted both AI and manual analyses of client calls
Results:
AI successfully measured coaching impact with over 90% accuracy
Provided targeted, actionable feedback to increase coaching effectiveness
Created data-driven insights to improve coaching impact further
HSBC: Improving the Quality & Efficiency of Call Center Training
HSBC needed to enhance call center interaction quality while reducing training costs
Hypothesis: Using AI to simulate common call center scenarios could increase quality of interactions by at least 50% and reduce coaching overheads by 20%
Test: A small sample of call center staff got access to AI for daily practice of common scenarios
Results:
Quality scores increased from 88% to 98% in the test group
Projected annual coaching cost reduction of ~£1 million
Demonstrated scalability of personalised practice scenarios
Rolls-Royce: Improving the Quality & Efficiency of AI-Powered Technical Training
Rolls-Royce needed to reduce employee training time while improving managerial oversight of employee skills
Hypothesis: An AI chatbot fine-tuned to be expert in engine protocol could reduce formal "away from desk" training hours by 20% and increase managers' awareness of team skills by 20%
Test: Internal documentation was compiled to build a custom GPT. The GPT was optimised to a) answer employee queries and b) flag safety-critical questions for manager review
Results:
35% drop in routine "away from desk" training hours without any negative impact on performance
12% faster issue resolution
Increased employee confidence in technical knowledge
Enhanced manager visibility of team skill development
Looking across these case studies, several critical success factors emerge:
Clear Strategic Alignment
Each organisation started with a specific business problem rather than technology exploration. They defined clear metrics for success and established baselines before implementation.
Controlled Testing
Before rolling out AI tools broadly, each organisation ran controlled tests with specific hypotheses and measurement frameworks.
Dual Focus on Efficiency and Effectiveness
Success came from improving both the speed AND quality of L&D operations. None of these organisations sacrificed effectiveness for efficiency.
Recommendations for L&D in 2025
Based on these findings, I shared two key priorities with the CLO group that I believe are crucial for any organisation looking to move beyond the adoption-impact paradox:
Taking a Structured Approach to Generic AI: turning adoption of AI into impact requires moving beyond ad-hoc usage to strategic implementation.
Start with clear business problems and hypotheses
Run controlled tests with measurable outcomes
Focus on both efficiency AND effectiveness gains
Build clear frameworks for when and how to use AI tools
Co-Creating Specialised L&D AI Tools: to fully realise the potential impact of AI on how we design, deliver & evaluate learning and development, we need to look beyond generic AI models and tools like enterprise ChatGPT & Co-Pilot. As we have already seen in the medical and coding industries, optimal AI usage in the workplace does not lie in trying to retrofit generic AI tools to be fit for purpose by building prompt libraries and developing prompt engineering skills. Instead, it lies in building purpose-built L&D solutions.
To achieve this, the industry must make a key shift that is more cultural than technological. It must reimagine itself not as a consumer of technology, but as a creator of it. In practice, this looks like:
Partnering with technology teams to develop specialised tools
Focusing on L&D-specific workflows and outcomes
Building in pedagogical principles from the ground up
Prioritising learning effectiveness over simple speed gains and cost reductions.
Conclusion: The Path From Illusion to Impact
What became clear in my discussions with CLOs and other L&D professionals this week is that we're at a critical juncture. The massive adoption of AI in L&D creates an unprecedented opportunity, but realising its potential requires a fundamental shift in how we think about and implement technology as an industry.
The organisations seeing real impact aren't just using generic AI tools within existing workflows - they're reimagining how L&D can work in an AI-enabled world. They're moving beyond the simple automation of existing tasks to ask deeper questions about how AI can transform learning effectiveness and efficiency.
As I concluded in my address to the CLOs, whether it happens in 2025 or later, the next phase of AI in L&D isn't about consuming generic AI—it's about co-creating AI that is purpose-built for learning transformation. The opportunity is here. The question is: will we seize it?
I'd love to hear your thoughts and experiences. How is your organisation approaching AI implementation in L&D?
Happy experimenting! Phil 👋
PS: Want to dive deeper into strategic AI implementation in L&D? Apply for a seat on one of my upcoming AI Learning Design Bootcamps where we explore these concepts in depth and get hands-on with prototyping and impact measurement.