Hey folks and happy new year! 👋
As we kick off 2025, I've been thinking a lot about how we might use the AI tools in our toolkit to do more than just generate content or automate grading. As I’ve said before (and will say again!), the real opportunity of AI lies not in doing what we already do, only faster and cheaper; it lies in scaling high-impact pedagogical approaches that have traditionally been difficult or impossible to implement broadly.
There are a tonne of practices that we know work more effectively than content followed by knowledge checks, but they are resource-intensive. Perhaps the best-known and most celebrated is the vision of 1:1 Socratic coaching: one-on-one tutoring that guides students through complex problem-solving, adapts to individual learning paths, and offers hands-on guided projects with immediate expert feedback.
The vision of the “teacher for every student” has become pervasive in Western culture and, perhaps inevitably, is the focus of a lot of experimentation with AI in the education space — think: Socratic by Google, Khanmigo by Khan Academy and a new generation of AI tutors that is growing literally every week.
The question I’ve been exploring is: which other pedagogical approaches that we know are powerful might AI help us design and deliver at scale? You may have seen my recent experiment with audio-based guided projects, co-designed and built with AI, for example.
But my focus has been on a particularly powerful pedagogical approach that has proven especially challenging to scale: Teaching Others. In this week's blog post I'll give a TLDR on the concept of Teaching Others and, inspired by a recent experiment by Chen et al. (December 2024), explore whether and how AI might enable us to scale this powerful pedagogical approach.
Let's go!
Teaching Others, aka Learning by Teaching: a TLDR
As its name suggests, Teaching Others is a pedagogical approach where students teach concepts to others as a way of learning those concepts themselves. The approach is so powerful because when students know they'll need to teach something, they engage with the material differently from the start: they look for connections, focus on implications, anticipate questions, and try to find the clearest way to explain things. It's basically active learning on steroids.
The research on the value of Teaching Others is pretty compelling. Back in 1984, Benware and Deci showed that just the expectation of potentially teaching something back makes students engage more deeply with material than studying for a test does. This makes sense: there's something about knowing you'll need to explain something to someone else that makes you think about it more carefully.
Here's a summary of why Teaching Others works so well:
First, teaching forces you to get really clear on what you know (and what you don't). Ever tried explaining something you thought you understood, only to realise you're not quite as solid on it as you thought? That moment of "oh wait, I need to figure this out better" – that's where the deep learning happens. Teaching others builds that moment in by design.
Second, as Zimmerman showed in his 2002 research, teaching naturally builds what we call self-regulated learning skills. Teaching someone something requires you to plan how you'll explain it, monitor whether your student is getting it, and reflect on what's working and what isn't. These are exactly the skills we want students to develop for lifelong learning.
Third, when you teach others you get immediate feedback on your understanding. Okita and Schwartz's 2013 study showed very clearly that feedback is a key component of the pedagogical value of teaching others. When your student looks confused or asks a question you hadn't anticipated, that's instant feedback that helps you refine your own understanding.
Teaching Others: a very short history
Teaching Others has shown up in education for a long time in several ways. The most common is through peer tutoring, where stronger students help their classmates. As Cohen, Kulik, and Kulik found back in 1982, in classrooms tutors often learn more than the students they're helping – acting as the teacher is itself a powerful way to learn.
Then there's the Jigsaw Method, which you may well have experimented with yourself. In this method, students become experts in one part of a topic and then have to teach it to their peers while learning the other parts from them. It's like putting together a puzzle where everyone holds some of the pieces, and it's proven to be a powerful strategy for accelerating the acquisition of new knowledge.
So Teaching Others is a powerful strategy for learning, but it’s also hard to deliver and to scale. Finding enough tutoring partners, ensuring quality interactions between students, managing time – it's a big challenge.
So, can AI help? The answer is, perhaps.
AI-Powered Students: a glimpse of what's possible
A team led by Chen just published some fascinating research (December 2024) where they tried something new: using ChatGPT as a teachable agent in programming education. Instead of having ChatGPT teach students, they flipped it and the students taught ChatGPT.
Chen et al’s work builds on other experimentation with AI and Teaching Others. In the early 2000s, for example, researchers at Vanderbilt University developed something called Betty's Brain. It was a computer program where students taught a virtual character about science by building concept maps.
Betty's Brain was groundbreaking because it provided key insights into the instructional structure and the pace and type of feedback required to get value from the process of teaching others. Perhaps most importantly, it confirmed that students can learn more by "teaching" virtual agents - i.e. the recipient of their teaching does not have to be another human.
In those studies, the students who taught the virtual agent:
Made significant learning gains - i.e. they learned more than those who did not have to teach the agent
Developed their self-regulated learning skills - i.e. they improved their ability to plan, monitor, and evaluate their own learning processes.
Back in the early 2000s, the challenge remained implementing this sort of tech at scale. So, is the rise of Generative AI the moment that we scale Teaching Others? This is the question asked by Chen et al. in late 2024, and here’s what they learned.
ChatGPT as a Student: findings from Chen et al.
Chen et al’s experiment was pretty simple: they took 41 university students and split them into two groups. One group learned programming the traditional way - watching instructional videos and practicing independently. The other group had to teach programming concepts to ChatGPT - specifically, how to solve the "Eight Queens" puzzle.
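For context, the "Eight Queens" puzzle asks you to place eight queens on a chessboard so that no two attack each other. A standard backtracking solution (my own sketch, not material from the study) looks like this:

```python
def solve_n_queens(n=8):
    """Return all solutions; each solution lists the queen's column per row."""
    solutions = []

    def place(row, cols):
        # cols[i] is the column of the queen already placed in row i
        if row == n:
            solutions.append(list(cols))
            return
        for col in range(n):
            # A new queen is safe if no earlier queen shares its column
            # or either diagonal (equal row and column offsets).
            if all(col != c and abs(col - c) != row - r
                   for r, c in enumerate(cols)):
                place(row + 1, cols + [col])

    place(0, [])
    return solutions

print(len(solve_n_queens(8)))  # the classic 8x8 puzzle has 92 solutions
```

Explaining an algorithm like this to a "student" forces exactly the kind of step-by-step articulation (why each constraint matters, when to backtrack) that Teaching Others is meant to trigger.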
The experimental setup is particularly interesting because they didn't just have students explain concepts to ChatGPT - the team structured the interaction to mirror real teaching scenarios. This meant students had to:
Assess ChatGPT's current understanding
Explain concepts clearly
Respond to questions
Guide ChatGPT through problem-solving steps
Check ChatGPT’s comprehension and give it feedback until it achieved the learning goal
What Worked: ChatGPT as a student
The results? In some ways, ChatGPT proved surprisingly effective in the role of "student":
Knowledge Gains: Students who taught ChatGPT showed significantly better understanding of programming concepts compared to the control group who learned via traditional methods. TLDR: the process of explaining algorithms to the AI helped them develop deeper conceptual understanding.
Skills Gains: Interestingly, students who taught ChatGPT also wrote clearer, more readable code. TLDR: the experience of having to explain their code to an AI "student" seemed to make them more conscious of code clarity and structure and more able to apply this in the real world.
Self-Regulated Learning: The students teaching ChatGPT also developed better self-regulation skills. TLDR: when compared with the control group, they were significantly better at planning their explanations, monitoring understanding, and reflecting on their teaching approach.
The Limitations: where ChatGPT fell short
Chen et al’s study also revealed some significant limitations in student interactions with ChatGPT:
Too Smart: ChatGPT often demonstrated an “unnaturally quick” understanding and produced suspiciously perfect code. In this context, AI’s ability to adapt and perform as required limited opportunities for students to practice crucial skills like debugging and error correction - key components of real teaching experiences.
Inconsistent "Personality": ChatGPT also struggled to maintain a consistent learner persona across sessions. Sometimes it would act like a complete beginner, other times like an advanced student, making it hard for student-teachers to build on previous interactions.
Limited Learning Progression: Unlike real students, ChatGPT didn't show authentic learning progression. Given how it’s built and trained, ChatGPT can’t genuinely build on previous knowledge or demonstrate partial understanding in realistic ways.
Building Better AI Students: what we need
Based on these findings, here's my vision for the ultimate AI student, optimised for learning through teaching:
1. Controlled Imperfection
We need to build AI systems that can:
Maintain consistent, pedagogically useful misconceptions
Make mistakes that create valuable teaching moments
Show realistic learning curves
Demonstrate partial understanding in authentic ways
2. Learning Continuity
The ideal AI student should:
Remember previous interactions and build on them
Show progressive understanding over time
Maintain consistent knowledge levels across sessions
Reference past "learning" experiences naturally
3. Authentic Learning Behaviours
The ideal AI student would also:
Ask progressively more sophisticated questions
Show genuine confusion when appropriate
Require multiple explanations for complex concepts
Demonstrate realistic struggle with difficult ideas
4. Adaptive Difficulty
Finally, ideal AI students would also:
Adjust their "comprehension level" based on the teacher's experience
Present appropriate challenges for different teaching skills
Provide opportunities for teachers to develop specific skills
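To make this wish-list concrete, here's a minimal sketch of what a simulated AI student with these properties might look like. Every name, number, and threshold here is hypothetical - an illustration of the design idea, not an existing system or the approach from Chen et al:

```python
from dataclasses import dataclass, field

@dataclass
class TeachableAgent:
    """A toy 'AI student' with controlled imperfection and continuity."""
    mastery: float = 0.0  # 0.0 = complete novice, 1.0 = mastered
    # Controlled imperfection: seeded misconceptions the teacher must diagnose.
    misconceptions: list = field(default_factory=lambda: [
        "a queen can't attack diagonally",
        "backtracking means restarting from scratch",
    ])
    history: list = field(default_factory=list)  # persists across sessions

    def receive_explanation(self, explanation: str, quality: float):
        """Update state after one teaching turn; quality (0-1) stands in
        for some rubric-based score of the explanation's clarity."""
        self.history.append(explanation)
        # Realistic learning curve: progress is gradual, never instant.
        self.mastery = min(1.0, self.mastery + 0.2 * quality)
        # Misconceptions are only shed once understanding is solid,
        # giving the teacher repeated chances to correct them.
        if self.mastery > 0.6 and self.misconceptions:
            self.misconceptions.pop(0)

    def respond(self) -> str:
        # Authentic behaviour: confusion early, sharper questions later.
        if self.mastery < 0.3:
            return "I'm confused - can you explain that again more simply?"
        if self.misconceptions:
            return f"I think I get it, but isn't it true that {self.misconceptions[0]}?"
        return "That makes sense. What about edge cases?"
```

A real implementation would sit between the learner and an LLM, using this state to steer the model's persona each turn, but even this toy version shows the core design move: the "student's" imperfections are governed by explicit, persistent state rather than left to the model's whims.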
The Road Ahead, aka pedagogy-first AI
Teaching Others is one of our most powerful pedagogical approaches, and AI might be the key to scaling it effectively. That said, we need to be thoughtful about how we proceed.
For this to work, we need:
AI developers to focus on creating more authentic student-like behaviours
Researchers to study how different types of AI student behaviours impact learning
Instructional designers to experiment and share what works
The overall message here is that there is huge potential for AI to increase the quality and impact of teaching and learning. However, in order to realise that potential fully we need to build specialised “pedagogy-first” AI tools and models co-created by educationalists and engineers.
In the meantime, my advice is to start exploring this potential value by starting small and cautiously. Try using ChatGPT as a practice student for simple concepts. Document what works and what doesn't, and share your experiences with others.
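If you want a starting point, here's the kind of prompt I'd try. The wording is my own suggestion rather than a tested recipe - adapt the topic and the seeded misconception to your context:

```python
# A hypothetical "practice student" prompt template for experimenting with
# ChatGPT (or any chat-based LLM) as the recipient of your teaching.
PRACTICE_STUDENT_PROMPT = """\
You are playing the role of a beginner student learning {topic}.
Stay in character for the whole conversation:
- You know only the basics, and you hold this misconception: {misconception}.
- Learn gradually: ask for clarification at least once before accepting
  any new idea, and never produce a perfect answer on the first try.
- Ask one question at a time, and refer back to my earlier explanations.
Begin by asking me to explain {topic} to you.
"""

print(PRACTICE_STUDENT_PROMPT.format(
    topic="for-loops in Python",
    misconception="a loop always runs exactly once",
))
```

Seeding an explicit misconception is a small hedge against the "too smart" problem Chen et al observed: it gives you something concrete to diagnose and correct, even if the model's persona drifts.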
We're at the beginning of something potentially transformative, but it'll take all of us - instructional designers, AI developers, and researchers - working together to get it right.
Happy experimenting!
Phil 👋
PS: If you want to get hands-on, hone your instructional design & AI knowledge and skills with me, apply for a place on my AI & Learning Design Bootcamp.