How Humans Do (and Don't) Learn
One of the biggest ever reviews of human behaviour change has been published, with some eye-opening implications for how we design & deliver learning experiences
This month, researchers from the University of Pennsylvania published one of the biggest-ever reviews of behaviour change efforts, i.e. interventions which do (and don't) lead to behavioural change in humans.
Researchers found that, across a number of domains, there are clear patterns in the types of interventions that are required to change both individual and collective behaviour. The results are pretty eye-opening.
TLDR: The interventions which we use most frequently to drive knowledge gain and behaviour change, such as video + quiz approaches, lectures and generic skills training, have a negligible impact on measurable changes in human knowledge and behaviour.
In short, traditional methods of teaching and learning are overrated and ineffective. The billion-dollar question is, of course: what’s a more effective alternative?
Behavioural Science & Learning Design
Despite its proven effectiveness, the science of behavioural change is still underutilised in both corporate L&D programs and formal education settings.
The science of behavioural change explores the mechanisms behind how habits form, how they can be modified, and how they impact human behaviour. Key researchers in this space like Wendy Wood, Bas Verplanken, and Benjamin Gardner have repeatedly shown that human habits are formed not through the presentation of new information but through repetition in stable, unchanging contexts where cues trigger automatic behaviours without conscious intent.
For learning professionals, this behavioural research underlines the importance of shifting our focus away from the design of content and towards the design of context.
In practice, the research suggests two major habit changes for us as learning professionals:
1. Shifting from Content Design to Context Design
First, we must shift away from our collective habit of asking, “What does the learner need to know?” and ask instead:
What is the habit that we want our learners to form?
How do we build an environment in which we can encourage, normalise and automate this habit?
2. Shifting from Short-Term to Long-Term Design
We must also shift away from the concept of the “one-off learning event”. Research shows that programs of around 12 weeks with weekly sessions are effective in initiating behaviour change.
However, more complex and sustained change often requires longer interventions. For example, programs of around six months with weekly interventions are recommended to secure lasting behaviour change and impact.
Alongside environment-focused methods, the research also emphasises the critical role of “behavioural nudges”. The most effective strategies cited in the research include:
Material Incentives: providing tangible rewards for desired behaviours.
Social Incentives: providing social recognition for desired behaviours.
Behavioural Incentives: ensuring that the required behaviour is effortless, e.g. by providing easy access to the resources and tools necessary for the desired behaviour.
Implications for Learning Design & Delivery
So, what does this mean in practice for how we design and deliver learning experiences?
I ran an analysis of what the data might mean for how we design and deliver common types of workplace training.
My findings are as follows:
Effect Size Scale:
<1.44: Negligible effect
1.44-2.47: Small effect
2.48-4.26: Medium effect
>4.26: Large effect
Key Takeaways
The data highlights two critical insights for instructional designers aiming to enhance their impact on learners:
1. The Importance of Practical Application
Traditional methods, such as webinars or in-person sessions focusing solely on theoretical knowledge, have minimal impact (effect size 0.04). In contrast, providing curated learning resources and opportunities for practical application significantly improves effectiveness (effect size 3.70).
In onboarding, for example, general training without practical tasks yields poor results (effect size 0.21). Implementing peer mentoring programs and regular check-in meetings dramatically improves new hires' adaptation (effect size 2.65). The most effective solutions involve comprehensive onboarding programs with continuous support (effect size 4.30).
Hands-on webinars followed by regular nudges and dynamic sales enablement programs provide significant improvements in measurable behaviour change (effect size up to 4.50).
2. The Importance of Long-Term Training & Support
One-off reviews and goal-setting sessions are largely ineffective (effect size 0.10). However, setting specific development goals and providing feedback and structured monitoring markedly enhance performance (effect size 2.20 to 4.50).
Generic announcements and one-off events are minimally effective (effect size 0.05). Live sessions led by peers and sustained initiatives with anonymous feedback loops foster better inclusivity (effect size 2.58 to 4.50).
Simple informational sessions are not very impactful (effect size 1.04). Detailed feedback and regular follow-ups are crucial for enhancing compliance and employee performance (effect size 2.62).
Interventions like regular team meetings for project progress reviews are much more effective than one-off seminars (effect size 2.53 vs. 0.10). Embedding continuous performance reviews ensures timely improvements and better outcomes (effect size 4.50).
These findings will likely come as no surprise to many in the world of learning and development. Ever since Bloom’s 2-sigma research back in the ’80s, we have appreciated the value of personalised, high-support learning experiences.
What’s different here is the level of specificity and clarity we are starting to get about what great L&D interventions look like. That said, the challenge remains designing and delivering these interventions well. As Bloom himself said, our biggest challenge is to “find methods of group instruction as effective as one-to-one tutoring.”
AI & the Optimisation of L&D
So, will AI change this? Is this the moment that we are finally able to design and deliver a model of workplace training capable of driving real, measurable changes in human capability, based on robust research like that discussed above?
If you’d like to experiment, here are some examples of how to use ChatGPT (GPT-4o) to prototype the design and delivery of some of the most effective L&D interventions, based on data from the research paper discussed above:
Personalised Projects
Use ChatGPT to develop customised learning paths and practical tasks for employees. For example, input a job description and required skills into ChatGPT and ask it to generate tailored learning objectives, content, practical exercises and feedback.
Example Prompt: “You’re an expert instructional designer who works in corporate L&D. Using the employee profile and resources provided, you must generate a set of learning objectives and practical exercises, each with curated content relevant to the task and learner. The personalised learning plan must enable the employee described to progress from [start point] to [goal], using the optimal instructional strategy for this task. You must also script succinct and actionable feedback text for great, good and poor performance.”
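If you'd rather script this than work in the chat window, here's a minimal sketch of the same idea using the OpenAI Python SDK. It assumes you have the openai package installed and an OPENAI_API_KEY environment variable set; the employee profile and the gpt-4o model choice are illustrative assumptions, not prescriptions:

```python
# Minimal sketch: generating a personalised learning plan via the OpenAI API.
# Assumes `pip install openai` and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

# Hypothetical employee profile; in practice, pull this from your HRIS or LMS.
profile = {
    "role": "Account Executive",
    "current_skills": ["discovery calls", "CRM hygiene"],
    "goal": "lead technical product demos independently",
}

system_prompt = (
    "You're an expert instructional designer who works in corporate L&D. "
    "Using the employee profile provided, generate a set of learning "
    "objectives and practical exercises, each with curated content relevant "
    "to the task and learner. Also script succinct, actionable feedback "
    "text for great, good and poor performance."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": f"Employee profile: {profile}"},
    ],
)

print(response.choices[0].message.content)
```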
Mentoring Programs
Use ChatGPT to pair up employees, design and schedule peer mentoring sessions, and draft check-in meeting agendas. ChatGPT can provide discussion points and follow-up actions to ensure continuous support.
Example Prompt: “You’re an expert instructional designer who works in corporate L&D and specialises in designing evidence-based peer mentoring programmes. You must create a 3-month peer mentoring program for X new hire, whose role is to XYZ. This must include bi-weekly check-in meeting agendas with key discussion points and follow-up actions.”
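For mentoring programmes, it can help to ask for structured output so the schedule can be loaded straight into a calendar or LMS. Here's a hedged sketch using the OpenAI API's JSON mode; the new-hire details and the `sessions` field name are illustrative assumptions:

```python
# Minimal sketch: drafting a 3-month peer mentoring programme as structured
# JSON so it can feed a scheduling tool. Assumes `pip install openai` and an
# OPENAI_API_KEY; the new-hire details below are placeholders.
import json
from openai import OpenAI

client = OpenAI()

prompt = (
    "You're an expert instructional designer who works in corporate L&D and "
    "specialises in designing evidence-based peer mentoring programmes. "
    "Create a 3-month peer mentoring programme for a new hire whose role is "
    "junior data analyst. Return JSON with a `sessions` list; each session "
    "needs `week`, `agenda`, `discussion_points` and `follow_up_actions`."
)

response = client.chat.completions.create(
    model="gpt-4o",
    response_format={"type": "json_object"},  # ask for machine-readable output
    messages=[{"role": "user", "content": prompt}],
)

programme = json.loads(response.choices[0].message.content)
for session in programme["sessions"]:
    print(f"Week {session['week']}: {session['agenda']}")
```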
Monitoring & Performance
Use ChatGPT to analyse learning and performance data and generate detailed feedback reports. ChatGPT can also draft a schedule of regular follow-ups to support continuous improvement.
Example Prompt: “You’re an expert instructional designer and data analyst who works in corporate L&D and specialises in impact evaluation. You must analyse the following compliance training results and generate a detailed feedback report for each participant, including actionable improvement steps and a follow-up schedule. Suggest a scoring system for monitoring if and how well learning is being applied.”
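And a final sketch for the monitoring use case: looping over exported training results and generating one feedback report per participant. The results data below is fabricated purely to show the shape of the input; in practice you'd export it from your LMS:

```python
# Minimal sketch: turning raw compliance-training scores into per-participant
# feedback reports. Assumes `pip install openai` and an OPENAI_API_KEY; the
# results list is an illustrative stand-in for a real LMS export.
from openai import OpenAI

client = OpenAI()

results = [
    {"name": "Participant A", "score": 62, "missed_topics": ["data retention"]},
    {"name": "Participant B", "score": 91, "missed_topics": []},
]

for result in results:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": (
                    "You're an expert instructional designer and data analyst "
                    "who specialises in impact evaluation. Analyse the "
                    "compliance training result provided and write a short "
                    "feedback report with actionable improvement steps and a "
                    "follow-up schedule."
                ),
            },
            {"role": "user", "content": str(result)},
        ],
    )
    print(f"--- {result['name']} ---")
    print(response.choices[0].message.content)
```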
Concluding Thoughts
Research into human behaviour change suggests that, in order to impact capability in real, measurable terms, we need to rethink how we typically design and deliver training.
The interventions which we use most frequently to drive behaviour change - such as video + quiz approaches and one-off workshops - have a negligible impact on measurable changes in human behaviour.
For learning professionals who want to change how their learners think and behave, this research shows conclusively the central importance of:
Shifting attention away from the design of content to the design of context.
Delivering sustained cycles of contextualised practice, support & feedback.
In many ways, the biggest challenge we face is not understanding what interventions to design to optimise impact on employees and business goals but how to deliver those interventions well at scale.
AI may provide a solution to the wicked problem of delivering optimal learning experiences, but only if we build it intentionally with this goal in mind.
Happy experimenting!
Phil 👋
PS: If you want to be part of a thriving community of AI + learning design folks, join me and my amazing community over on LinkedIn.
PPS: If you want to dive into the world of AI & learning with me and a community of innovators from the world of education, check out my 3-week online bootcamp.