ChatGPT: the world's most influential teacher
New research shows that millions of us are "learning with AI" every week: what does this mean for how (and how well) humans learn?
Hey folks 👋
This week, an important piece of research landed that confirms the gravity of AI’s role in the learning process. The TLDR is that learning is now a mainstream use case for ChatGPT: around 10.2% of all ChatGPT messages (that's ~2BN messages sent by over 7 million users per week) are requests for help with learning.
Within the “Practical Guidance” category, tutoring and teaching make up 36% of messages. “Asking” interactions are growing faster than “Doing” and are rated higher quality by users. Younger people contribute a huge share of messages, and growth is fastest in low- and middle-income countries (How People Use ChatGPT, 2025).
If AI is already acting as a global tutor, the question isn’t “will people learn with AI?”—they already are. The real question we need to ask is: what does great learning actually look like, and how should AI evolve to support it? That’s where decades of learning science help us separate “feels like learning” from “actually gaining new knowledge and skills”.
Let’s dive in.
What the Research Found
At consumer scale, usage climbed to ~18B messages per week by mid-2025, with Practical Guidance, Seeking Information, and Writing accounting for roughly three-quarters of all messages (OpenAI Economic Research, 2025).
At work, Writing dominates (about 40% of work messages), and roughly two-thirds of that is editing or transforming user-provided text rather than net-new generation (OpenAI Economic Research, 2025). Coding is smaller than many assume (≈4.2% of messages), while “social/companionship” content is a tiny fraction (OpenAI Economic Research, 2025).
Two patterns emerge here:
Tutoring is mainstream: ≈1 in 10 messages across the platform are teaching/tutoring; within Practical Guidance, it’s more than a third.
Asking is more common than doing: About 49% of those using ChatGPT for “learning” ask for information, which raises the question: are these actually learning interactions, or just the equivalent of Googling? Around 40% of interactions are classed as “doing”, i.e. the user actively learns with ChatGPT by participating and producing something, e.g. completing a task and getting feedback.
The bottom line: At unprecedented scale, people are already using AI to learn. The challenge now is ensuring that what they do—and how models respond—maps to how humans actually learn best.
The Illusion of Learning
The usage trend is encouraging—more Asking and a lot of tutoring—but there’s a catch. Many interactions still optimise for ease: quick answers, instant drafts, heavy scaffolds.
These interactions can feel productive in the moment yet often fail to produce learning that transfers to new problems. Neuroscientists and other experts call this the illusion of learning: when fluency (information looks familiar, work feels smooth) is mistaken for measurable improvements in mastery (Soderstrom & Bjork, 2015; Dunlosky et al., 2013).
Three forces drive the illusion of learning:
Fluency bias. Re-reading, highlighting, or skimming AI summaries makes material look clear without strengthening memory traces or problem schemas (Dunlosky et al., 2013).
Performance ≠ learning. Immediate performance during study (e.g., breezing through blocked practice) can go up even as long-term retention and transfer go down (Soderstrom & Bjork, 2015; Rohrer & Taylor, 2007).
Over-scaffolding. Worked examples and step-by-step hints help novices—but if support isn’t faded, learners don’t build independent problem-solving (Sweller & Chandler, 1994; Renkl, 2005).
AI can amplify these traps: it’s exceptionally good at making things look easy—polished summaries, perfect code, frictionless outlines. If we only consume those outputs, we outsource the very mental work that builds durable knowledge (Karpicke & Roediger, 2008).
Here’s a 60-second reality check I use to spot the illusion of learning when people tell me they learn with AI:
1) Recall without cues.
Close the tab. On a blank page, write the core ideas from memory—definitions, steps, and one example.
Why it matters: If you can’t retrieve it unaided, you’ve likely built fluency, not knowledge (Karpicke & Roediger, 2008).
2) Explain it simply.
Give a step-by-step explanation to a novice (or your future self). No jargon; one tight analogy.
Why it matters: Self-explanation reveals gaps and deepens understanding (Chi et al., 1989).
3) Choose the method, not just do the method.
Tackle a mixed set of problems and name the strategy first for each item.
Why it matters: Interleaving forces discrimination and transfer (Rohrer & Taylor, 2007; Brunmair & Richter, 2019).
4) Perform under constraints.
Do a timed, rubric-anchored task that matches the real performance (e.g., a 20-minute essay or coding kata), then score it.
Why it matters: Authentic assessment predicts future performance (Gulikers et al., 2004).
5) Retain it next week.
Put two spaced reviews on the calendar (e.g., +2 days, +10 days) and test yourself again.
Why it matters: Without spacing, retention decays—even when today felt great (Cepeda et al., 2006).
Next time you “learn” something with AI, score yourself: 0–2 “yes” = illusion likely. 3–4 “yes” = partial learning; target the weak spots. 5/5 “yes” = durable knowledge in progress.
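For the spreadsheet-minded, the scoring above reduces to counting “yes” answers across the five steps. This is just a toy illustration of the thresholds stated in the text, nothing more:

```python
def reality_check(yes_count: int) -> str:
    """Map the number of 'yes' answers (0-5) from the five-step
    reality check to the verdicts described in the text."""
    if not 0 <= yes_count <= 5:
        raise ValueError("yes_count must be between 0 and 5")
    if yes_count <= 2:
        return "illusion likely"
    if yes_count <= 4:
        return "partial learning; target the weak spots"
    return "durable knowledge in progress"

print(reality_check(3))  # partial learning; target the weak spots
```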
The 10 Principles of Substantive Learning
The trap of the illusion of learning is real, but there is a way out. Below, I’ve put together 10 evidence-based principles that convert “feels like learning” into actual gains in memory, understanding and transfer.
Think of them as a design spec for every AI-assisted study session—and a product checklist for anyone building learning tools. Each principle is grounded in decades of research (I’ll note the effect size so you can see the practical impact: roughly 0.2 = small, 0.5 = medium, 0.8 = large) and paired with a simple “use ChatGPT like this” prompt.
Taken together, they add the right kind of friction—retrieval, self-explanation, interleaving, feedback, spacing—so your time with AI stops polishing answers and starts building durable skills.
Here’s what this might look like in practice:
1) Embrace the struggle: the “desirable difficulty” zone
What: Learning sticks when it’s effortful but doable; too easy breeds a “familiarity illusion,” too hard leads to disengagement.
Why: Guided discovery with scaffolding shows medium effects (d ≈ 0.40–0.50) because it keeps learners in that productive challenge zone (Alfieri et al., 2011).
Try this when Learning with AI:
Difficulty ladder: “Here’s a solved example [paste]. Make a new one with the same concept but different numbers → add one twist → turn it into a word problem.”
2) Do before you know: productive failure / problem-based learning
What: Attempting a problem before instruction “primes” your brain to value the explanation.
Why: Problem-based learning improves transfer (d ≈ 0.30–0.50) (García et al., 2021), and productive failure yields d ≈ 0.36 (Loibl et al., 2017).
Try this when Learning with AI:
Safe simulation: “Pose a novel problem. Don’t teach me yet. Let me try. Then offer a minimal hint → deeper hint → full solution only after I explain my approach.”
3) Treat content as a resource, not the destination
What: Reading ≠ learning. Use content to solve an active problem you’ve already tried.
Why: Prior struggle creates a “need to know,” deepening processing (Loibl et al., 2017).
Try this when Learning with AI:
Pre-reading primes: “Before I read this French Revolution chapter, generate three analytical questions I probably can’t answer yet to focus my reading.”
4) Practice how you’ll perform: authentic assessment
What: You learn what you practice. Make practice mirror the final performance.
Why: Performance-based assessment predicts future performance with large effects (d ≈ 0.80–1.00) (Gulikers et al., 2004).
Try this when Learning with AI:
Case → memo: “From this article [paste], create a 1-page CEO case with a decision point. I’ll write a memo. Grade me against this rubric [paste].”
5) Close the loop: feedback as a superpower
What: Targeted, actionable feedback is the single biggest lever.
Why: Meta-analyses show medium average effects (d ≈ 0.40) when feedback answers: Where am I going? How am I going? Where to next? (Hattie & Timperley, 2007).
Try this when Learning with AI:
Three-question frame: “Act as a writing tutor. Assess only my thesis and hook using: goal → current performance → concrete next steps.”
6) Make memory do the work: retrieval practice
What: Pulling information from memory beats re-reading for long-term retention.
Why: Practice testing delivers medium-to-large effects (d ≈ 0.46–0.65) (Adesope et al., 2017).
Try this when Learning with AI:
Varied retrieval: “Using the attached doc [paste], quiz me on the Krebs cycle with (1) a ‘near miss’ MCQ, (2) a fill-in-the-blank, (3) an explain-it-to-a-teen in my own words.”
7) Defeat the cram: spacing
What: Space study sessions so you revisit topics just before you forget them.
Why: Spacing yields medium effects (d ≈ 0.42) on long-term retention (Cepeda et al., 2006).
Try this when Learning with AI:
Plan and design for spacing: “I have a master’s-level exam in 6 weeks on derivatives, integrals, and series [curriculum attached]. Build a weekly plan that uses the spacing method to resurface older topics at optimal intervals. For each weekly plan, include suggested content and activities to help me learn via the spacing method [attach doc on the spacing method].”
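If you’d rather put the reviews on your own calendar than delegate the plan, the spacing idea is just date arithmetic. A minimal sketch; the expanding intervals (+2, +10, +30 days) are illustrative defaults I’ve assumed, not values prescribed by the research:

```python
from datetime import date, timedelta

def spaced_reviews(start: date, intervals_days=(2, 10, 30)) -> list[date]:
    """Return calendar dates for spaced reviews after an initial
    study session on `start`. Intervals are illustrative defaults;
    tune them to your own exam date."""
    return [start + timedelta(days=d) for d in intervals_days]

# Example: study on 20 Sep 2025, then review at +2, +10 and +30 days.
reviews = spaced_reviews(date(2025, 9, 20))
print([d.isoformat() for d in reviews])
# ['2025-09-22', '2025-09-30', '2025-10-20']
```

Each printed date is a calendar slot for a short self-test, not a re-read.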
8) Build flexible knowledge: interleaving
What: Mix problem types so you must select the right strategy each time.
Why: Interleaving outperforms blocked practice with d ≈ 0.45–0.50 (Brunmair & Richter, 2019).
Try this when Learning with AI:
Plan and design for interleaving: “I have a master’s-level exam in 6 weeks on derivatives, integrals, and series [curriculum attached]. Build a weekly plan that uses the interleaving method to test my ability to apply key concepts in a variety of contexts. For each weekly plan, include suggested content and activities to help me learn via the interleaving method [attach doc on interleaving].”
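Mechanically, interleaving just means ordering practice so consecutive problems come from different topics, forcing you to pick the strategy each time. A minimal round-robin sketch (topic names and item labels are placeholders I’ve made up):

```python
from itertools import zip_longest

def interleave(problem_sets: dict[str, list[str]]) -> list[str]:
    """Round-robin through the topic buckets so consecutive
    problems come from different topics wherever possible."""
    buckets = list(problem_sets.values())
    mixed = []
    for round_items in zip_longest(*buckets):
        mixed.extend(item for item in round_items if item is not None)
    return mixed

# Blocked practice would be d1 d2 i1 i2 s1 s2; interleaving mixes them:
session = interleave({
    "derivatives": ["d1", "d2"],
    "integrals":   ["i1", "i2"],
    "series":      ["s1", "s2"],
})
print(session)  # ['d1', 'i1', 's1', 'd2', 'i2', 's2']
```

The point of the mixed ordering is that you can no longer autopilot the same method six times in a row.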
9) Learn together: social cognition & the protégé effect
What: Explaining to others surfaces gaps and consolidates understanding.
Why: Cooperative learning shows strong effects when designed with positive interdependence and individual accountability (d ≈ 0.64) (Johnson et al., 2000).
Try this when Learning with AI:
Debate prompts: “Moderate a CRISPR ethics debate. Provide 3 propositions, each with a brief ‘for’ and ‘against’ to kick us off, then facilitate.”
10) Respect bandwidth: manage cognitive load
What: Working memory is limited; reduce extraneous load so you can invest in germane load (schema building).
Why: Worked examples and thoughtful sequencing show medium-to-large effects (d ≈ 0.50–0.60) (Sweller & Chandler, 1994).
Try this when Learning with AI:
Progressive disclosure: “Teach me the Krebs cycle [doc attached] in layers: one-sentence purpose → high-level analogy → main stages → detailed steps.”
Conclusion
If the data on ChatGPT published this week says anything, it’s this: learning isn’t a sideshow for AI anymore; it’s taking centre stage. Billions of messages each week already look like tutoring. The risk is we mistake fluency for mastery and settle for quick answers that don’t transfer. The opportunity is to turn that firehose into deliberate practice: retrieval, self-explanation, interleaving, authentic performance, feedback, spacing—the right kind of friction that makes knowledge stick.
This is where the “global tutor” becomes real. Not when AI writes our essays, but when it coaches our thinking: hints before answers, difficulty that adapts, feedback tied to clear goals, spaced review on the calendar. That’s how we move from “Asking” to actually learning.
Or, to quote Demis Hassabis (CEO of DeepMind Technologies) in a recent interview: the most important skill now is “learning how to learn.” The twist is that we can (and should) prompt AI to help us do exactly that: to design our sessions around the 10 principles, not around convenience. Ask for a problem before the explanation. Tell it to quiz you, not coddle you. Demand a rubric, not a pat on the back. Schedule the next recall, not the next read.
If you’re a learner, pick one topic and run the 60-second reality check. Apply two principles today (retrieval + spacing is a great start). If you’re a builder, bake these principles into defaults—Socratic modes, faded scaffolds, assessment-grade feedback, calendar-native spacing—so good learning isn’t an expert trick, it’s the baseline.
We are already treating AI like a tutor. Now let’s make it a great one—by learning how to learn, and by telling our AI to teach us like we mean it.
Happy experimenting!
Phil 👋
PS: Want to hone your pedagogical expertise and explore the impact of AI on your day-to-day work with me and a group of people like you? Apply for a place on my AI & Learning Design Bootcamp.



