Beyond the Hype: What 18 Recent Research Papers Say about How to Use AI in Instructional Design
Aka, how to use (and how not to use) AI in your day-to-day work
Hey folks!
The last couple of years have brought a relentless stream of AI promises to instructional design. Every conference, every LinkedIn post, every vendor pitch seems to suggest we're on the brink of a “professional transformation” that will make everything faster, cheaper and better.
Meanwhile, many of us are quietly concerned that AI might not be the 100% good news story we're being sold and that in practice the reality is... more complicated.

Thankfully, amidst all of the hype, the last 18 months have seen a growing body of peer-reviewed research on AI and instructional design, giving us systematic, robust and reliable data on how AI is being used in the field and how it's impacting (for better or worse) the speed and quality of our work.
Perhaps unsurprisingly, the results are far more nuanced than either the AI evangelists or the skeptics would have you believe. Some findings might surprise you — others will validate concerns you've probably been too polite to voice in all those AI webinars you've attended recently.
TLDR:
The Efficiency Sweet Spot: AI excels at automating routine tasks (65% time savings in lesson planning, 95% in assessment generation) but requires deep human oversight and insight for quality and context.
The Creative Balance: AI can increase idea diversity by 47% when used as a brainstorming partner, but over-reliance reduces creative uniqueness by 32%.
The Skills Imperative: Designers who invest in prompt engineering achieve 58% better results—this is now a core professional skill, not a technical add-on.
Let's dive into what the research actually tells us, and what this means for how you should (and shouldn’t) use AI in your day-to-day work. 🚀
Part I: Where AI Goes Wrong in Instructional Design
The research points to three critical risks that every instructional designer should understand before diving into AI adoption.
Risk #1: Pedagogical Blind Spots
The integration of AI into instructional design offers significant potential for efficiency and scalability. However, research consistently highlights critical risks associated with rigid, template-driven AI tools that constrain instructional creativity and reduce instructional designers' (IDs) agency. These tools often impose linear workflows that limit flexibility, reflective practice, and the ability to adapt lessons to diverse learner needs.
What the research reveals: Børte & Lillejord's study found that tools with rigid instructional design workflows significantly reduce user agency, making designers feel "boxed in" and less able to tailor instruction to their students' needs. By contrast, more flexible frameworks and interfaces, like the ILUKS planner and ChatCLD, enable more free-flowing refinement of AI-generated learning designs across multiple pedagogical dimensions, improving contextual fit and user satisfaction.
Beyond workflow and creativity constraints, AI's fundamental pedagogical limitations pose a broader challenge. AI-generated content frequently lacks a deep understanding of learning science principles and struggles with contextual adaptation. Hu et al. discovered that 78% of GPT-4-generated math lesson plans required "significant adjustments" to align with local standards and learner backgrounds. Similarly, Krushinskaia et al. and Luo et al. observed that AI often reproduces instructional design language superficially without meaningful adaptation to specific learner needs.
Much of the research reports that AI's outputs frequently contain inaccuracies and fabricated information. Choi et al. and Madunić & Sovulj reported up to 40% fabricated references in AI-generated content, while DaCosta & Kinsell and Yıldızhan Bora & Kölemen documented regular cases of misinformation and bias. These issues underscore the necessity of rigorous fact-checking and expert review.
Over-reliance on AI can also erode professional judgment; Krushinskaia et al. found that 40% of designers accepted AI suggestions without adaptation, leading to less creative and contextually appropriate lessons. Meanwhile, Yang & Stefaniak highlighted concerns about deskilling and the need for more critical reflection on AI's role.

To mitigate these risks, research advocates for treating AI outputs as drafts requiring expert validation and adaptation. Structured, human-centered frameworks like ChatCLD and ARCHED support iterative refinement and pedagogical alignment. Additionally, addressing equity and access challenges is vital to prevent widening learning gaps.
Regular reflection on AI's role and ongoing AI literacy training for all stakeholders ensure that AI remains a partner in instructional design rather than a crutch that diminishes expertise and learning quality.
TLDR:
Avoid rigid, linear AI tools that limit design creativity and agency
Use flexible, structured frameworks to refine AI outputs systematically
Always fact-check and adapt AI-generated content; never accept it blindly
Maintain human oversight to ensure nuance, context, and alignment with learning goals
Address equity and access to prevent widening learning gaps
Foster ongoing reflection and AI literacy to safeguard professional judgment and instructional quality
Risk #2: Ethics, Privacy & Bias
AI integration brings a host of ethical challenges that many instructional designers have yet to fully address. These risks are not theoretical: the research shows that issues of privacy, bias, copyright, and digital equity are already impacting learning environments.
What the research demonstrates:
Bias and Stereotyping: Bolick & da Silva's hands-on testing revealed some uncomfortable truths—AI tools perpetuated stereotypes in 28% of image outputs, raising concerns about the reinforcement of harmful biases in educational materials. DaCosta & Kinsell also noted that AI sometimes recommended media or delivery systems that were culturally or contextually inappropriate, highlighting the need for careful review.
Algorithmic Discrimination and Policy Gaps: Hodges & Kirschner's policy synthesis emphasised that AI-driven systems can amplify existing inequities and discrimination if not carefully monitored, especially when algorithms are trained on biased data sets or lack transparency.
Privacy and Consent Risks: The use of cloud-based AI tools introduces significant privacy risks. Bolick & da Silva identified that voice cloning and other generative AI features raised consent and data protection concerns. Sensitive learner data, if entered into cloud-based AI systems, could be exposed or misused. The ARCHED framework stressed the need for transparent, responsible and collaborative approaches to mitigate these risks.
Copyright and Intellectual Property: The proliferation of AI-generated media (images, text, audio) creates new copyright challenges for Instructional Designers. The research of Bolick & da Silva and Madunić & Sovulj separately highlighted the ambiguity around ownership and the importance of securing proper licenses for AI-generated assets.
Digital Equity and Access: Infrastructure barriers further complicate ethical AI adoption. Yıldızhan Bora & Kölemen documented how unreliable internet connectivity and limited device access prevented some students from fully engaging with AI-enhanced learning, effectively creating a two-tiered educational experience. This digital divide risks exacerbating existing inequities, particularly for marginalised learners.
Transparency and Stakeholder Communication: Studies such as ARCHED and ChatCLD emphasise the importance of being transparent with learners and stakeholders about the role, limitations, and risks of AI in instructional design. Clear communication helps manage expectations and fosters trust.
TLDR:
Conduct regular bias audits and secure proper licenses for all AI-generated assets
Never input sensitive learner data into cloud-based AI systems—use anonymised data or secure, on-premise solutions
Proactively address digital equity by providing alternatives for learners with limited access
Be transparent about AI's supplementary role and its limitations with all stakeholders
Monitor evolving legal and policy frameworks to ensure compliance and ethical practice
Risk #3: The Creativity Trap
When it comes to creativity in Instructional Design, the research paints a nuanced picture. While AI can be a powerful catalyst for innovative thinking (more on this later), over-reliance on automated and standardised design suggestions may actually stifle the creative thinking and critical reflection that drive innovative learning experiences and deepen impact.
Research findings:
Diminished Creative Diversity: Luo et al. found that excessive AI use correlated with a 32% reduction in unique assessment designs among instructional designers. The convenience of AI-generated templates and suggestions can lead to over-standardisation, with fewer novel or contextually tailored solutions emerging in course development.
Passive Consumption and Reduced Critical Thinking: Yıldızhan Bora & Kölemen's study of a digital photography course revealed that students who relied heavily on AI feedback reported diminished creativity and critical thinking. Instead of actively engaging with content, some learners became passive consumers of AI-generated suggestions, missing opportunities to develop their own ideas and problem-solving skills.
Risk of Homogenisation: Krushinskaia et al. observed that teachers who accepted AI-generated lesson suggestions without adaptation produced less creative and less contextually appropriate lessons. This trend risks homogenising instructional approaches and reducing the diversity of learning experiences.
Balancing AI and Human Ingenuity: Research by DaCosta & Kinsell and the ARCHED framework suggests that AI is most valuable as a brainstorming partner—expanding the solution space and providing raw material for human teams to refine. However, the creative process must be intentionally preserved through deliberate "human-only" design sessions and reflective practice.
Encouraging Reflection and Critique: Studies recommend requiring both students and designers to critique or build upon AI outputs, fostering metacognition and deeper learning. Regular audits of course materials for diversity of teaching approaches can help guard against over-standardisation and ensure a vibrant, creative learning environment.
TLDR:
Balance AI use with deliberate "human-only" design sessions to preserve creativity
Require personal reflection and creativity in assignments—encourage critique and adaptation of AI outputs
Regularly audit courses for diversity of teaching approaches to avoid over-standardisation
Use AI as a creative partner, not a replacement for human ingenuity and critical thinking
Part II: The Real Benefits of AI in Instructional Design
Understanding the risks of AI is crucial, but so too is understanding its real, measurable benefits. The same body of recent research that reveals AI's dangers also brings to light where it might genuinely transform instructional design for the better. Rather than dismissing AI wholesale or embracing it uncritically, the evidence points toward strategic integration—leveraging AI's strengths while maintaining human oversight in areas where it excels.
The research reveals four domains where AI consistently delivers value to instructional designers, often in ways that complement rather than compete with human expertise.
1. The Efficiency Revolution
AI is transforming instructional design by dramatically accelerating routine tasks—including assessment creation and lesson planning—while maintaining rigor and freeing up designers for higher-value, creative work.
Research evidence:
Dramatic Time Savings: Studies consistently report significant reductions in time spent on routine instructional design tasks. For example, Choi et al. found that ChatGPT reduced lesson planning time by 65%, while Cheng et al. reported that their TreeQuestion system cut MCQ generation time by 95%.
Scaling Assessment with Rigour: Cheng et al. demonstrated that AI-enabled assessment generation led to a 300% increase in assessment volume without sacrificing rigour, provided there was expert oversight. However, the research stressed that AI-generated assessments must be reviewed to ensure alignment with learner profiles, delivery modes, and cognitive objectives. Bloom's Taxonomy was highlighted as a useful structure for guiding AI prompts and ensuring assessments target the desired cognitive levels.
Rapid, Iterative Content Development: Dickey & Bejarano's GAIDE framework enabled fast, iterative development of course content and assessments, further supporting the efficiency gains reported across multiple studies.
Limits of Automation: While AI excels at drafting, assessment generation, and automating repetitive work, studies agree that creative, contextual, and strategic instructional design tasks still require substantial human input. Human oversight remains essential for ensuring quality, relevance, and alignment with learning goals.
TLDR:
Use AI for initial drafting, assessment generation, and other routine tasks to automate repetitive work and scale output
Maintain human oversight for quality control—AI is a support tool, not a replacement
Structure AI prompts with frameworks like Bloom's Taxonomy to guide assessment design at appropriate cognitive levels
Redirect the time saved to creative and strategic work that only humans can do effectively
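To make the Bloom's Taxonomy point above concrete, here's a minimal sketch of what "structuring an AI prompt around a cognitive level" can look like in practice. The taxonomy levels and action verbs are standard; the prompt wording, function names and parameters are my own illustrative assumptions, not taken from any of the cited studies.

```python
# Sketch: composing an assessment-generation prompt that explicitly targets
# one level of Bloom's Taxonomy. The prompt text is illustrative, not a
# template prescribed by the research.

BLOOM_VERBS = {
    "remember":   ["define", "list", "recall"],
    "understand": ["explain", "summarise", "classify"],
    "apply":      ["demonstrate", "solve", "use"],
    "analyse":    ["compare", "differentiate", "examine"],
    "evaluate":   ["justify", "critique", "assess"],
    "create":     ["design", "construct", "formulate"],
}

def build_assessment_prompt(topic: str, level: str, n_questions: int = 5) -> str:
    """Compose an LLM prompt targeting one Bloom's level for a given topic."""
    if level not in BLOOM_VERBS:
        raise ValueError(f"Unknown Bloom's level: {level}")
    verbs = ", ".join(BLOOM_VERBS[level])
    return (
        f"Write {n_questions} multiple-choice questions on '{topic}'.\n"
        f"Target the '{level}' level of Bloom's Taxonomy: question stems "
        f"should ask learners to {verbs}.\n"
        "For each question, give 4 options, mark the correct answer, and "
        "briefly explain why each distractor is plausible but wrong."
    )

prompt = build_assessment_prompt("photosynthesis", "analyse", n_questions=3)
print(prompt)
```

The point is not the code itself but the habit it encodes: naming the cognitive level and its verbs forces the AI toward the assessment type you actually want, and gives your expert review a clear standard to check the output against.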
2. Differentiation & Localisation at Scale
AI's ability to adapt design to specific learners and contexts represents one of its most promising applications, moving instructional design beyond "spray and pray" one-size-fits-all approaches toward tailored, needs-based learning mapped to specific personas, motivations and contexts.
Key research insights:
From Generic to Persona-Driven Design: Several studies highlight how AI empowers instructional designers to move away from generic, undifferentiated training and instead design learning experiences mapped to distinct learner personas. Madunić & Sovulj demonstrated that custom chatbots, when tailored to learner profiles, improved concept mastery by 29%—a result attributed to the chatbot's ability to adapt explanations and practice to individual needs, rather than relying on uniform content for all learners.
Adaptive Pathways and Real-Time Differentiation: Multiple studies report that AI-powered chatbots and adaptive learning systems can analyse learner interactions and performance data to dynamically adjust content, feedback, and assessments. This enables real-time differentiation—so learners receive support, challenges, and resources mapped to their current skill level, goals, and preferred learning modalities, rather than being forced through a rigid, uniform curriculum. This is especially effective for foundational or routine skills, freeing human experts to focus on more complex, context-sensitive instructional needs.
Frameworks for Transparent, Persona-Aligned Design: The ARCHED framework explicitly connects every AI-generated suggestion to pedagogical rationales, ensuring that adaptations are not only transparent but also mapped to specific learner objectives and contexts. This structured, multi-stage workflow helps IDs design with empathy and intentionality, using AI to generate, analyze, and refine learning objectives and assessments for distinct personas rather than defaulting to generic solutions.
Efficiency and Scalability Without Sacrificing Relevance: By automating content generation and analysis for different learner segments, AI enables organisations to scale personalised learning without multiplying design workload. This allows IDs to spend more time on strategic decisions—such as mapping content to personas' motivations, access needs, and prior knowledge—rather than on repetitive content creation.
Human Oversight Remains Essential: While AI can power much of the differentiation and persona-mapping at scale, studies consistently emphasise the need for human review and refinement—especially for advanced, sensitive, or highly contextual content. IDs must ensure that AI-driven pathways remain accurate, inclusive, and aligned with organisational and learner goals.
TLDR:
Use AI to design learning mapped to specific needs, motivations, and contexts—not just generic content for the masses
Implement adaptive learning pathways that adjust in real time to learner data, supporting differentiated instruction at scale
Leverage frameworks like ARCHED to ensure every AI-driven adaptation is pedagogically justified and transparent
Maintain human oversight for complex or sensitive content, but let AI handle routine differentiation and feedback
3. Amplified Creativity
AI is transforming instructional design by acting as a creative collaborator and expanding the diversity of design options available to teams. Rather than stifling creativity, research shows that AI can amplify it—serving as a brainstorming partner that rapidly generates a wide range of approaches, media, and activities that instructional designers might not have considered independently.
Evidence from the research:
Expanding the Solution Space: Luo et al. found that AI-assisted brainstorming increased idea diversity among instructional designers by 47%, demonstrating that AI can help teams break out of habitual patterns and explore new instructional strategies and formats. DaCosta & Kinsell's research further showed that ChatGPT-4 facilitated creative exploration of delivery systems and activity options, streamlining decision-making and offering diverse perspectives that enriched the design process.
More Pathways, More Possibilities: The ARCHED framework study reported participants identifying 40% more design pathway options when using AI, underscoring AI's value in suggesting alternative approaches and media. This design diversity enables instructional designers to better accommodate a range of learning preferences and needs, supporting differentiation and inclusion.
Jumpstarting the Creative Process: Multiple studies and practical workshops highlight how AI can rapidly generate content variants, brainstorm course ideas, and outline different instructional materials, allowing teams to compare, combine, and refine options efficiently. This reduces cognitive fatigue and accelerates the creative process, freeing human designers to focus on higher-level strategic and contextual decisions.
Human Judgment Remains Essential: Across all studies, a consistent theme is that while AI excels at generating options, human expertise is critical for filtering, contextualising, and integrating these ideas into coherent, contextually appropriate solutions. AI's suggestions must be curated and adapted to fit the specific goals, audience, and constraints of each project.
TLDR:
Initiate projects with AI brainstorming sessions to rapidly generate creative, diverse and innovative design ideas
Use AI to explore multiple design pathways and media variants—expanding your repertoire beyond habitual choices
Curate and adapt the best AI-generated ideas with team expertise; human judgment is essential for selection and contextual fit
Leverage AI to accommodate diverse learning preferences and support inclusive, differentiated design
Remember that AI's recommendations will not always align with pedagogical best practices; check and validate everything
Conclusion: The Future of Instructional Design
Despite the hype about AI "revolutionising" education in general and Instructional Design in particular, the research tells a more nuanced story. The reassuring headline is that AI is not replacing Instructional Designers—it's augmenting their work in specific, measurable ways while simultaneously revealing what makes human expertise irreplaceable.
The efficiency gains afforded by AI free designers to focus on work that requires distinctly human capacities: ensuring pedagogical integrity and contextual adaptation demands the kind of nuanced judgment that emerges from expertise, experience and empathy. Ethical oversight and bias mitigation require moral reasoning that goes beyond algorithmic optimisation. Creative and strategic design depends on the ability to synthesise diverse perspectives and imagine possibilities that don't yet exist.
What we're seeing is Instructional Designers evolving into "augmented designers"—professionals who combine technological fluency with deepened human expertise. Their value increasingly lies in knowing when and how to leverage AI's capabilities while maintaining the critical distance necessary to evaluate, adapt, and contextualise its outputs.
A Skills Revolution for Instructional Design?
While many commentators have already hailed the death of prompt engineering as a key skill, the research consistently shows that designers who invest in prompt engineering and AI literacy achieve dramatically better results than those who approach AI casually. Krushinskaia et al. demonstrated that engineered prompts improved output relevance by 58% compared to basic requests. This reframes AI competency as a core professional skill rather than a technical add-on.
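To illustrate the gap between a "basic request" and an "engineered prompt", here's a hedged sketch of the kind of structure the research points to. The template fields (role, audience, duration, outcomes, output format) reflect common prompt-engineering practice; the exact wording and function names are my own assumptions, not the prompts used by Krushinskaia et al.

```python
# Sketch: a basic request vs. an engineered prompt with explicit role,
# context, constraints and output format. Illustrative only.

BASIC = "Write a lesson plan about data privacy."

def engineered_prompt(topic, audience, duration_min, outcomes, output_format):
    """Assemble a prompt with explicit role, audience, constraints and format."""
    outcome_lines = "\n".join(f"- {o}" for o in outcomes)
    return (
        "You are an experienced instructional designer.\n"
        f"Design a {duration_min}-minute lesson on '{topic}' for {audience}.\n"
        f"Learning outcomes:\n{outcome_lines}\n"
        f"Output format: {output_format}\n"
        "Flag any claims that need fact-checking by a human reviewer."
    )

prompt = engineered_prompt(
    topic="data privacy",
    audience="new customer-support hires with no technical background",
    duration_min=45,
    outcomes=[
        "Identify the three categories of personal data we handle",
        "Apply the escalation process when a data request arrives",
    ],
    output_format="a table of timings, activities and materials",
)
print(prompt)
```

Notice that the engineered version does half the instructional design work up front: it names the learners, the outcomes and the deliverable, which is exactly the contextual information the studies found AI cannot infer on its own.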
At its heart, Instructional Design has always been and remains a role which requires deep pedagogical knowledge, deep psychographic connection with learners, and creative problem-solving to address complex educational challenges. What has changed is not our role or purpose, but the toolkit that we have available for realising our goals.
The rise of AI empowers us to delegate the functional tasks which have distracted us for decades and to return to the true purpose of our role: deeply expert instructional design which drives learning outcomes.
The real question isn't whether AI will change instructional design—83% of us are already using it. The question is whether we'll use it thoughtfully to amplify rather than replace the expertise that makes great instructional design possible.
Happy experimenting!
Phil 👋
PS: Want to explore how to use AI through hands-on application with me and a cohort of people like you? Consider applying for a place on my AI & Learning Design Bootcamp.
PPS: If you want to stay on top of the latest research, subscribe to my Learning Research Digest — a monthly digest of the most important research in instructional design.
References
Bai, S., Lo, C. K., & Yang, C. (2025). Enhancing instructional design learning: a comparative study of scaffolding by a 5E instructional model-informed artificial intelligence chatbot and a human teacher. Interactive Learning Environments, 33(3), 2738–2757.
Bolick, C., & da Silva, E. (2024). Exploring Artificial Intelligence Tools and Their Potential Impact to Instructional Design Workflows and Organisational Systems. TechTrends, 68(1), 23–44.
Børte, K., & Lillejord, S. (2024). Learning to teach: Aligning pedagogy and technology in a learning design tool. Teaching and Teacher Education, 137, 104514.
Cheng, Y., Xu, X., & Jin, Y. (2024). TreeQuestion: Assessing Conceptual Learning Outcomes with LLM-Generated Multiple-Choice Questions. British Journal of Educational Technology, 55(2), 400–421.
Choi, J., Kim, S., Lee, J., & Moon, J. (2024). Utilizing Generative AI for Instructional Design: Exploring Strengths, Weaknesses, Opportunities, and Threats. TechTrends, 68(1), 1–29.
DaCosta, B., & Kinsell, C. (2024). Investigating Media Selection through ChatGPT: An Exploratory Study on Generative Artificial Intelligence in the Aid of Instructional Design. Open Journal of Social Sciences, 12(4), 187–227.
Dickey, E., & Bejarano, A. (2024). GAIDE: A Framework for Using Generative AI to Assist in Course Content Development. 2024 IEEE Frontiers in Education Conference (FIE), 1–6.
Hodges, C. B., & Kirschner, P. A. (2024). Innovation of Instructional Design and Assessment in the Age of Generative Artificial Intelligence. Educational Technology Research and Development, 72(1), 1–12.
Hu, Y., Zhang, Y., & Wang, X. (2024). Teaching Plan Generation and Evaluation With GPT-4: Unleashing the Potential of LLM in Instructional Design. Computers & Education: Artificial Intelligence, 5, 100162.
Ishika, I., & Murthy, S. (2024). ChatCLD Framework: Supporting Teachers in Contextualizing Learning Designs using ChatGPT. 2024 IEEE International Conference on Advanced Learning Technologies (ICALT), 60–67.
Krushinskaia, E., Elen, J., & Raes, A. (2024). Design and Development of a Co-instructional Designer Bot: Using GPT-4 to Support Teachers in Designing Instruction. Educational Technology Research and Development, 72(2), 301–325.
Luo, T., Muljana, P. S., Ren, Y., & Young, S. (2024). Exploring Instructional Designers' Utilization and Perspectives on Generative AI Tools: A Mixed Methods Study. Educational Technology Research and Development, 72(2), 278–290.
Madunić, M., & Sovulj, A. (2024). Application of ChatGPT in Information Literacy Instructional Design. Publications, 12(1), 12.
McNeill, L. (2024). Automation or Innovation? A Generative AI and Instructional Design Snapshot. The IAFOR International Conference on Education – Hawaii 2024 Official Conference Proceedings, 187–194.
Wang, Y., & Lin, X. (2024). ARCHED—A Human-Centered Framework for Transparent, Responsible, and Collaborative AI-Assisted Instructional Design. Proceedings of the Innovation and Responsibility in AI-Supported Education Workshop (2024), 120–142.
Xu, X., Zhang, Y., & Li, H. (2024). Integration of Artificial Intelligence into Instructional Design: A Scoping Review. Computers & Education: Artificial Intelligence, 5, 100147.
Yang, Y., & Stefaniak, J. E. (2025). An Exploration of Instructional Designers' Prioritizations for Integrating ChatGPT in Design Practice. Educational Technology Research and Development, 73(1), 1–25.
Yıldızhan Bora, B., & Şahin Kölemen, C. (2025). Integrating AI into instructional design: A case study on digital photography education in higher education. Contemporary Educational Technology, 17(3), 583 ff.