The AI Content Explosion: What Your Learners Actually Think (And Why It Matters)
Aka, how to build AI content that engages learners & enhances learning
Hey Folks!
Last week, Google Education published its year in review for 2025. In it, they showcased more than 150 AI tools built primarily to do two things: 1. generate content (video, images, infographics, etc.) and 2. deliver AI-powered teaching assistants at scale.
Meanwhile, in 2025, ChatGPT, Claude, Gemini, and a tonne of other AI platforms made it easier than ever for a single instructional designer to produce, in minutes, what once took teams weeks to build. Accordingly, adoption is accelerating. A recent survey I ran with Synthesia shows that we are auto-generating videos, images, diagrams, question banks and other educational content with AI faster and in greater volume than ever before.
So, as 2025 comes to a close, one thing is very clear: AI-powered content generation has become industrialised. Educators are increasingly swapping traditional videos for synthetic AI talking heads. AI-generated images, diagrams and audio are already becoming ubiquitous, while personalised AI tutors and chatbots are becoming increasingly common parts of the learning experience infrastructure rather than experimental “blow your mind” add-ons.
In the last year, the volume of AI-generated learning experiences has not just accelerated — it’s exploded. Amidst all of this, the conversation among educators and product developers has focused overwhelmingly on production efficiency rather than learner experience and impact.
We often ask: “Can we generate this faster?” or “Can we personalise this at scale?” Much more rarely do we ask the questions that actually matter:
What do learners think and feel about all this AI-generated content? And how is the AI-generated learning experience shaping their engagement and outcomes?
In this week’s blog post I explore what the “typical” learning experience looks like at the end of 2025, share what we know about how learners feel about the rise of the AI-first learning experience, and ask: how is this impacting learner engagement and outcomes?
Let’s dive in!
The Learner Experience in 2025
By late 2025, AI-generated content is deeply embedded across the learning landscape. The typical learner now encounters:
Synthetic instructors and AI video: Research shows AI-generated videos can match human videos on comprehension when scripts and pedagogy are equivalent, though learners still prefer human presence for relational and ethically complex content (Deng et al., 2024; Mills, 2024; Leiker et al., 2023).
Here’s an example of the sort of AI-generated content we regularly provide for learners, featuring my own AI avatar, created using Colossyan, an emerging favourite for video generation among instructional designers.
Personalised AI tutors and chatbots: Students appreciate 24/7 availability and non-judgmental feedback, but often copy outputs directly or use bots as answer machines rather than thinking partners, leading to reduced self-regulated learning (Wu et al., 2025; Assessing Student Readiness and Perceptions of ChatGPT Study, 2024; Perceptions and Usage of AI Chatbots in HE, 2024).
AI-generated assessments at scale: Question banks, adaptive quizzes, and auto-generated practice problems give learners abundant material—but quality issues (ambiguous items, factual errors) persist and erode trust when not reviewed by experts (Ahmed et al., 2025; Sohrabi et al., 2024).
Personalised learning pathways: Content is increasingly tailored to individual preferences, which boosts engagement but reduces exposure to productive struggle and diverse perspectives (The Untold Story of Training Students with GenAI, 2024; PAIGE Study on Personalized AI Podcasts, 2024).
Multimodal AI content: Text, audio, images, and video are combined in ways that can either support understanding (when designed well) or overwhelm cognitive capacity (when designed poorly) (Learners’ Acceptance of Multimodal AIGC Study, 2025; Leveraging Feature Engineering in AI-Generated Content Study, 2025).
Automated feedback: Scaling feedback through AI frees instructor time but risks generic, misaligned, or inaccurate guidance without human oversight (Harnessing GenAI for Automated Feedback Review, 2024).
Learner Attitudes to the Explosion of AI-Generated Learning Content
While (tellingly) very little research has been done into learner attitudes toward AI-generated content, it is possible to extract insights from a range of recent studies to get a clear picture of what students think and feel about specific types of AI-generated learning experiences.
TLDR: Learners are cautiously optimistic but deeply selective about what they’ll accept from AI. Their attitudes vary dramatically depending on the type of content and how it’s used—and these attitudes matter because they directly shape engagement, trust, and learning outcomes.
AI Chatbots and Tutors: Most Appreciated, But Often Misused
What learners like:
24/7 availability and instant responses without having to wait for instructor office hours (Assessing Student Readiness Study, 2024; Wu et al., 2025)
Non-judgmental feedback that lets them ask “stupid questions” without embarrassment (Perceptions and Usage of Chatbots Study, 2024; Wu et al., 2025)
Concept clarification and alternative explanations when they’re stuck on something specific (Assessing Student Readiness Study, 2024; Wu et al., 2025)
Convenience and efficiency for getting quick answers or checking understanding (Perceptions and Usage Study, 2024)
What learners don’t like (and what they actually do):
Despite appreciating chatbots, learners simultaneously report using them to “just get the answer” rather than as thinking partners—undermining the very learning the tool was supposed to support (Assessing Student Readiness Study, 2024; Perceptions and Usage Study, 2024; Wu et al., 2025)
Learners express uncertainty about when to trust bot responses, especially for complex or nuanced topics (Wu et al., 2025)
Students prefer that chatbots are clearly labeled as AI and dislike when AI tutors are presented as human instructors without disclosure (AI Chatbots’ Role Study, 2025; Wu et al., 2025)
AI-Generated Video & Synthetic Instructors: Acceptable, But Not Preferred
What learners accept:
Procedural, step-by-step demonstrations where AI video performs as well as human video when scripts and pedagogy are equivalent (Deng et al., 2024; Mills, 2024; Leiker et al., 2023)
Scalable, consistent explanations for foundational concepts that don’t require nuance or emotional intelligence (Deng et al., 2024)
What learners don’t like:
Realistic AI avatar instructors (clones or synthetic humans) trigger discomfort, lower social presence, and reduce learners’ willingness to ask questions or challenge explanations—especially when learners feel the avatar is trying to “pass” as human (Ebner et al., 2024; Lim & Ullah, 2023)
For ethics, identity, or emotionally charged topics, learners explicitly prefer and benefit from human instructors; AI presenters feel inappropriate and inadequate for these contexts (Ebner et al., 2024; Lim & Ullah, 2023)
Learners report lower connection, trust, and motivation when all instruction is delivered by synthetic presenters without visible human presence (Ebner et al., 2024; Wu et al., 2025)
Uncanny valley effects with hyper-realistic avatars—learners find them “off” or unsettling, preferring either clearly stylized animated characters or real humans over almost-but-not-quite-human avatars (Ebner et al., 2024)

AI-Generated Examples & Model Answers: Helpful, But Timing & Quantity Are Everything
What learners like:
Seeing “what good looks like” through AI-generated examples helps them understand standards and expectations, especially for writing and problem-solving tasks (Lira et al., 2025)
Abundant practice materials generated by AI give learners more opportunities to work with varied examples (Lira et al., 2025)
What learners don’t realise (but research shows):
When examples are shown BEFORE they attempt the task themselves, learners like the convenience but unknowingly undermine their own learning—they become dependent on AI to structure their thinking (Lira et al., 2025)
When examples are shown AFTER an initial attempt, learners show significant learning gains on later AI-free tasks—the “attempt → compare → reflect” sequence preserves the cognitive effort that drives learning (Lira et al., 2025)
Learners don’t spontaneously notice this difference in their own learning; they rate both experiences positively but only one actually helps them develop independent capability (Lira et al., 2025)
AI-Generated Assessments & Practice Questions: Valued for Volume, Undermined by Errors
What learners like:
Abundant practice opportunities with AI-generated quizzes and question banks give them more material to study with (Ahmed et al., 2025; Sohrabi et al., 2024)
Variety in question formats and difficulty levels helps them prepare more thoroughly (Ahmed et al., 2025)
What learners don’t like:
Ambiguous question stems, poorly aligned distractors, or incorrect answer keys erode trust and cause frustration—when learners encounter obviously flawed AI-generated questions, they lose confidence in the course and the instructor (Ahmed et al., 2025; Sohrabi et al., 2024; Wu et al., 2025)
Learners want the ability to flag problematic questions and have them reviewed; when errors go unaddressed, it damages credibility (Sohrabi et al., 2024; Wu et al., 2025)
Personalised AI Content: Engaging, But Concerns About Over-Personalisation
What learners like:
Personalised explanations, pacing, and examples that adapt to their preferences and performance increase engagement and enjoyment (PAIGE Study, 2024; Learners’ Acceptance of Multimodal AIGC, 2025)
Multiple modalities (text + audio + video + images) when well-designed help them learn in ways that match their preferences and manage cognitive load (Multimodal AIGC Study, 2025)
What learners worry about:
Over-personalisation reducing exposure to challenge and diverse perspectives—students express concern that too much adaptation will keep them in a “comfort zone” and prevent them from developing resilience and broader thinking (Untold Story of Training Students Study, 2024; PAIGE Study, 2024)
Isolation from collaborative learning when personalised AI pathways reduce opportunities to work with peers (Untold Story Study, 2024)
AI-Generated Explanations and Summaries: Appreciated, But Distrusted
What learners like:
Simplified explanations that reduce complexity and anxiety, especially for struggling learners tackling difficult material (Wu et al., 2025)
Quick summaries for getting the gist of readings or lectures efficiently (Wu et al., 2025)
What learners don’t like:
Concerns about hallucinations, factual errors, and oversimplification—especially in specialised or high-stakes domains like medicine, law, or engineering (Sohrabi et al., 2024; Integrating AI into Orthodontic Education, 2025; Wu et al., 2025)
Learners want AI explanations to complement, not replace, primary sources and human expertise; they worry about losing nuance and depth (Wu et al., 2025).
Do Learners’ Attitudes & Feelings Matter?
A reasonable question to ask at this point is: does it actually matter what learners think and feel about what we produce for them? The answer is a resounding yes—because learner attitudes determine not just whether they’ll use AI tools, but whether and how much they actually learn.
Research shows that learner attitudes and feelings about a learning experience (and those who created it) are among the strongest predictors of engagement—sometimes stronger than the technical quality of the content itself (Deng et al., 2024). In other words: a well-designed learning experience loses power and impact if learners don’t perceive it as credible or worth their effort.
Here’s what we know from the research:
If learners think content is credible, clear, and worth their attention, they engage deeply with it. When that engagement is paired with good learning design—tasks that require thinking, opportunities for independent practice, and reflection—engagement translates to actual learning gains (Deng et al., 2024).
If learners think content is “just an AI shortcut” or lower quality, they disengage—treating it as something to get through rather than learn from. This happens even if the content is pedagogically sound, because their perception overrides the design (Deng et al., 2024).
Attitudes towards AI and performance expectancy (the belief that “this tool will help me do better”) are the strongest predictors of whether learners will adopt AI tools and use them effectively (Meta-Analysis of Intention to Use, 2024; Wu et al., 2025). When learners feel that AI is useful and trustworthy, they use it more—which can amplify both positive and negative effects, depending on how it’s designed into the learning experience.
TLDR: Your learners’ attitudes don’t just influence engagement metrics: they determine if and how much they learn and develop.
What This Means for Design
The research is clear: AI use can both positively and negatively impact learning outcomes.
When AI-generated content is embedded in well-structured learning activities, it can enhance engagement and outcomes; when it’s used without intentional design principles, it might still increase learner satisfaction but undermine the very cognitive processes that produce durable learning (Lira et al., 2025; Prather et al., 2024; Marzouki et al., 2024; Wu et al., 2025).
The recent “Coach not Crutch” experiment provides perhaps the clearest evidence of this in action. Here, identical AI tools produced opposite outcomes depending on task design. When learners drafted first and then compared their work to AI examples, they improved on later AI-free tests. When learners could access AI solutions without first attempting the problem themselves, learning gains disappeared (Lira et al., 2025).
The pattern holds across all content types—video, assessments, feedback, chatbots, personalised pathways—the medium matters far less than the pedagogical and learning structure around it.
Generated and delivered without pedagogical intent, AI-generated content tends to:
Reduce cognitive effort and self-regulation (Marzouki et al., 2024; Prather et al., 2024)
Increase satisfaction and confidence without corresponding skill gains (Wu et al., 2025; Prather et al., 2024; Marzouki et al., 2024)
Benefit advanced learners while quietly harming novices (Prather et al., 2024; Wu et al., 2025)
Replace human connection and metacognitive reflection with convenience and automation (Ebner et al., 2024; Lim & Ullah, 2023; Wu et al., 2025)
But when deliberately integrated into structured learning designs that require initial attempts, comparison and critique, reflection, and independent performance, the same AI tools can enhance learning, build skills, and scale high-quality educational experiences (Automated Feedback Review, 2024; Lira et al., 2025; Wu et al., 2025).
How to Use AI in Instructional Design, According to the Research
So, how do we optimise the benefits of AI-generated learning content, while mitigating its risks? Here are nine practical tips based on the research.
1. AI Avatars / Virtual Humans
✅ DO
Use avatars for specific, bounded roles (delivering announcements, narrating demos, modelling procedures) while keeping human instructors visible for discussion, feedback, and complex judgment (Lim & Ullah, 2023; Ebner et al., 2024).
Use AI video for scalable exposition, procedural demos, and content that updates frequently, where consistency matters more than emotional connection (Lim & Ullah, 2023; Ebner et al., 2024; Deng et al., 2024).
Prefer clearly stylised animated characters or transparently synthetic voices over hyper-realistic human clones, and always label AI-generated presenters as such; learners prefer clearly artificial characters or real humans to “almost-human” avatars that trigger uncanny valley responses (Ebner et al., 2024; Lim & Ullah, 2023).
❌ DON’T
Don’t deploy almost-human synthetic instructors that claim to be the real teacher without clear disclosure; this feels deceptive, triggers discomfort, and reduces learners’ willingness to ask questions or challenge explanations (Lim & Ullah, 2023; Ebner et al., 2024).
Don’t use hyper-realistic cloned instructor avatars without disclosure or consent; these can trigger uncanny valley responses, reduce comfort, and damage trust when learners discover the deception (Ebner et al., 2024).
Don’t replace all visible human instruction with synthetic presenters, especially for ethics, identity, or emotionally charged topics where learners explicitly prefer and benefit from human presence (Lim & Ullah, 2023; Ebner et al., 2024).
2. AI Chatbots & Conversational Tutors
✅ DO
Frame chatbots as thinking partners for hints, alternative explanations, brainstorming, and checking understanding—not for producing final answers (Assessing Student Readiness and Perceptions of ChatGPT Study, 2024; Perceptions and Usage of AI Chatbots in HE Study, 2024; Wu et al., 2025; AI Chatbots’ Role in Online Learning Study, 2025).
Require learners to submit interaction artifacts (screenshots, transcripts) plus reflections on how the bot helped, what they learned, and what they’d do differently next time; this keeps self-regulation active (Assessing Student Readiness Study, 2024; Perceptions and Usage of Chatbots Study, 2024; Wu et al., 2025; AI Chatbots’ Role Study, 2025).
Clearly label bots as AI and explain their limitations; transparency helps learners calibrate trust and understand when to verify information (AI Chatbots’ Role in Online Learning, 2025; Wu et al., 2025).
❌ DON’T
Don’t position chatbots as “answer engines” for graded work; when students routinely paste prompts and copy outputs, research shows reduced self-regulated learning, weaker independent problem-solving, and declining critical thinking (Prather et al., 2024; Marzouki et al., 2024; Wu et al., 2025).
Don’t hide that a tutor is AI-powered; undisclosed AI reduces learners’ ability to calibrate trust and undermines academic integrity norms (AI Chatbots’ Role in Online Learning Study, 2025; Wu et al., 2025).
Don’t allow chatbots to provide complete solutions to graded assignments; this bypasses the generative cognitive work essential for learning (Prather et al., 2024; Wu et al., 2025).
3. AI-Generated Examples and Model Answers
✅ DO
Require learners to draft/attempt first, then show AI examples side-by-side for comparison; ask them to annotate differences, explain what changed and why, then revise or redo the task without AI. This “attempt → compare → reflect → independent practice” pattern drove significant learning gains in research (Lira et al., 2025).
Use AI-generated examples to show “what good looks like” after learners have attempted the task themselves, helping them understand standards and expectations for writing and problem-solving (Lira et al., 2025).
Provide abundant AI-generated practice materials for formative learning, giving learners more opportunities to work with varied examples (Lira et al., 2025).
❌ DON’T
Don’t show AI examples before learners attempt the task themselves; when students start from AI outputs and edit them, they bypass the generative cognitive work that drives learning and become dependent on AI to structure their thinking (Lira et al., 2025).
Don’t let learners use AI to complete assignments without first attempting independent work; research shows this eliminates learning gains even when learners feel satisfied and confident (Lira et al., 2025).
4. AI-Generated Practice Questions, Quizzes & Assessments
✅ DO
Use AI to generate large pools of practice items for low-stakes formative assessment, then have experts review for clarity, alignment, and correctness (Ahmed et al., 2025; Sohrabi et al., 2024; Revolutionizing eLearning Assessments Study, 2024).
Invite learners to flag problematic questions and discuss why they’re flawed; this turns quality control into a metacognitive exercise and teaches evaluative thinking (Ahmed et al., 2025; Sohrabi et al., 2024; Revolutionizing eLearning Assessments, 2024).
For high-stakes exams, always include human-authored or human-verified items to ensure quality and alignment with learning outcomes (Ahmed et al., 2025).
❌ DON’T
Don’t deploy unreviewed AI-generated items in summative assessments; studies show AI questions often have ambiguous stems, misaligned distractors, or incorrect answer keys, which can mis-measure learning and damage learner trust (Ahmed et al., 2025; Sohrabi et al., 2024).
Don’t signal that all AI-generated items are “official” without allowing learners to question or flag them; unaddressed errors reduce credibility (Sohrabi et al., 2024; Wu et al., 2025).
5. AI-Generated Explanations, Readings, & Summaries
✅ DO
Use AI to draft explanations, then have subject-matter experts review and refine them for accuracy, nuance, and alignment with learning outcomes (Sohrabi et al., 2024; Integrating AI into Orthodontic Education Review, 2025; Wu et al., 2025; Harnessing GenAI for Automated Feedback, 2024).
Turn verification into a learning task: ask students to fact-check AI summaries against trusted sources, identify gaps or errors, and rewrite sections in their own words; this builds critical literacy and keeps cognitive engagement high (Sohrabi et al., 2024; Integrating AI into Orthodontic Education, 2025; Wu et al., 2025; Automated Feedback Review, 2024).
Position AI explanations as complements to primary sources, not replacements; maintain learners’ exposure to complex, nuanced material (Wu et al., 2025).
❌ DON’T
Don’t publish AI explanations as-is without expert review, especially in specialised or high-stakes domains; unreviewed AI content often contains subtle errors, oversimplifications, or hallucinations that novices cannot detect (Sohrabi et al., 2024; Integrating AI into Orthodontic Education, 2025; Wu et al., 2025).
Don’t replace all primary sources or complex readings with AI summaries; this weakens learners’ tolerance for ambiguity and reduces deep reading skills (Wu et al., 2025).
6. AI-Generated Feedback
✅ DO
Use AI to provide immediate, scalable feedback on routine, well-structured tasks (e.g., grammar, formula application, procedural steps) to free up instructor time for higher-value interactions (Harnessing GenAI for Automated Feedback Review, 2024).
Reserve human feedback for complex judgments, conceptual misconceptions, and high-stakes work (Automated Feedback Review, 2024).
Design “feedback loops” where learners respond to AI feedback, revise their work, then get human review on their revision; this combines efficiency with quality (Automated Feedback Review, 2024).
Explicitly teach learners how to evaluate and act on AI feedback critically rather than accepting it uncritically (Automated Feedback Review, 2024).
❌ DON’T
Don’t fully automate feedback on complex, high-stakes work without human review; generic or inaccurate AI feedback can mislead learners, especially novices who lack the expertise to recognise when feedback is off-target (Automated Feedback Review, 2024).
Don’t use AI feedback as a replacement for all human interaction; learners value and need timely human responses for motivation, clarification, and relational support (Wu et al., 2025; Automated Feedback Review, 2024).
7. AI-Generated Video
✅ DO
Use AI video for scalable exposition, procedural demos, and content that updates frequently; maintain visible human presence through live Q&A, discussion facilitation, and personalised feedback (Lim & Ullah, 2023; Ebner et al., 2024; Deng et al., 2024).
Pair AI video with active learning tasks (annotation, critique, application) to prevent passive consumption (Deng et al., 2024; Mills, 2024).
Use AI-generated videos when scripts and pedagogy are equivalent to human instruction for procedural and foundational content (Deng et al., 2024; Mills, 2024; Leiker et al., 2023).
❌ DON’T
Don’t deploy AI video without reviewing scripts for accuracy; AI-generated narration can contain errors or bias (Deng et al., 2024; Mills, 2024).
Don’t replace all visible human instruction with synthetic presenters, especially for ethics, identity, or emotionally charged topics where learners explicitly prefer and benefit from human presence (Lim & Ullah, 2023; Ebner et al., 2024).
Don’t use hyper-realistic cloned instructor avatars without disclosure or consent; these trigger uncanny valley responses, reduce comfort, and damage trust (Ebner et al., 2024).
8. AI-Generated Audio (Podcasts, Narration, Dialogues)
✅ DO
Use AI audio for controllable-difficulty listening practice, volume, and variety, but complement it with human-recorded speech for natural prosody and cultural modelling (Using AI-Generated Audio for Italian Listening Study, 2025; Learners’ Acceptance of Multimodal AIGC Study, 2025).
For personalised AI-generated podcasts, ensure alignment with learning objectives and pair with active tasks (note-taking, summarizing, question generation) to prevent passive listening (PAIGE Study on Personalized AI Podcasts, 2024).
❌ DON’T
Don’t rely solely on AI-generated audio for language learning or contexts where prosody, tone, and cultural nuance matter; synthetic voices can flatten these elements and limit communicative competence (Using AI-Generated Audio for Italian Listening, 2025; Learners’ Acceptance of Multimodal AIGC, 2025).
9. AI-Generated Images
✅ DO
Have subject-matter experts review AI-generated diagrams, illustrations, and visual aids for accuracy before use in instruction (Evaluating Anatomical Accuracy of AI-Generated Images Study, 2024; Sohrabi et al., 2024).
Build media literacy tasks: ask learners to identify whether images are AI-generated, discuss what cues they used, and reflect on implications for trust and verification (Detection of AI-Generated Images Study, 2024).
Audit image generation prompts for bias and stereotyping before deploying images representing people or cultures (Easily Accessible Text-to-Image Generation Amplifies Stereotypes Study, 2023).
❌ DON’T
Don’t use AI-generated images without checking for factual accuracy, bias, or stereotyping; text-to-image models are known to amplify demographic stereotypes and produce anatomically incorrect diagrams (Easily Accessible Text-to-Image Stereotypes Study, 2023; Evaluating Anatomical Accuracy of AI-Generated Images, 2024).
Don’t assume learners can reliably distinguish AI-generated images from authentic ones; research shows significant variation in detection ability, raising media literacy concerns (Detection of AI-Generated Images Study, 2024).
Research TLDR
The same AI tool can produce opposite learning outcomes depending on task design. When learners draft first then compare to AI examples, they improve on later AI-free tests.
When learners access AI solutions without attempting work themselves, learning gains disappear—even though satisfaction remains high in both cases (Lira et al., 2025).
Your design choices matter more than the AI tool itself.
Conclusion: The Road Ahead for AI & Education
As 2025 draws to a close, we stand at a critical juncture. The industrialisation of AI-generated learning content is not slowing down—if anything, it’s accelerating. Google’s 150+ new education AI tools are just the beginning. More platforms, more features, more automation are coming in 2026 and beyond.
One thing that the research makes unmistakably clear is this: the question is no longer whether AI will generate our learning content, but whether we’ll design that content to actually support learning.
The emerging picture shows us something contradictory and urgent: learners mostly like AI-generated content. They find it convenient, accessible, and engaging. They use it intensively. They also report feeling more confident and supported. And yet, beneath that positive surface, something troubling is happening—especially for novices. Independent problem-solving is weakening. Critical thinking scores are declining. Self-regulated learning behaviours are eroding. The confidence learners feel doesn’t match the capability they’re building.
This isn’t a technology problem. It’s an instructional design problem.
The same AI tool that undermines learning when used as a crutch can enhance learning when used as a coach. The difference isn’t the sophistication of the model, the realism of the avatar, or the personalisation of the content. The difference is whether the learning design preserves the hard thinking, maintains human connection, scaffolds metacognition, and regularly verifies independent capability.
As we head into 2026, instructional designers, educators, and learning experience creators face a choice: we can keep asking “Can we generate this faster?” and optimize for production efficiency, satisfaction scores, and engagement metrics—while quietly eroding the very capabilities our learners need to develop.
Or we can start asking different questions:
Does this design preserve the cognitive effort that drives learning?
Are we checking what learners can do independently, not just with AI support?
Have we positioned AI as a thinking partner, not an answer machine?
Is human presence still visible where it matters most?
The research tells us that learners can’t reliably answer these questions for themselves. They feel great while learning less. They’re confident while becoming dependent. They’re satisfied while their skills quietly decay.
That means the responsibility falls to us—the designers, educators, and creators of learning experiences—to ask these questions on their behalf. To design with intention. To measure what matters. To resist the siren call of infinite content generation and instead focus on finite, deliberate, research-informed design.
The AI content generation boom isn’t going away. But whether it becomes a revolution in learning or just a revolution in content production efficiency is still up to us. The choice, and with it the future of human learning, is ours.
Happy innovating!
Phil 👋
PS: Want to dive deeper into how to use AI to 10X the value and impact of human learning? Check out my AI & Learning Design Bootcamp where we explore exactly how to operationalise learning science with emerging technologies.