Hi Folks!
We recently completed the May cohort of my AI & Learning Design Bootcamp — and what a cohort it was! Over four intensive weeks, 20+ educators, learning designers and I explored practical AI applications across our day-to-day practice—from curriculum design and content generation to automated feedback and predictive evaluation.
Among the 30+ AI use cases we test in the bootcamp, one has emerged as particularly popular and potentially significant: using AI to simulate real learners' behaviour and feedback.
Imagine having access to your target learners' thoughts, behaviours and emotional responses throughout your entire design process. While this may seem like science fiction (and, perhaps, risky), recent research by Park et al. (2024) demonstrates that AI personas built from intentionally gathered, structured data can predict learner behaviour more accurately than traditional human-centred approaches, achieving approximately 85% accuracy in behavioural simulation and prediction.
In this week's blog post, I examine this research, share what we have discovered on the bootcamp and - based on what we learned - provide a systematic guide to enable you to build a high-quality AI learner persona in 5 steps.
Let's dive in! 🚀
The Research: AI Persona Reliability & Performance
Recent research by Park et al. (2024) provides compelling evidence that AI personas, when constructed from rich interview data, can simulate human attitudes and behaviours with remarkable precision.

Importantly, the implications of Park et al.'s research extend beyond mere accuracy and reliability; their findings suggest that AI (specifically, GPT-4o) might already be capable of addressing many of the systematic challenges that characterise traditional learner research approaches:
Predictive accuracy: AI personas constructed from detailed interview transcripts achieved 85% accuracy in predicting participants' survey responses—substantially higher than typical inter-rater reliability in qualitative research.
Behavioural consistency: Simulated behaviours demonstrated a 0.98 correlation with real-world choices across multiple scenarios and time periods.
Bias reduction: Interview-based personas exhibited significantly lower demographic stereotyping compared to traditional persona methods that rely on demographic assumptions.
Robustness: Even when researchers removed 80% of the interview data, interview-based personas continued to outperform demographic-only approaches.
Implications for Instructional Design Practice
While the empirical findings from Park et al. (2024) are impressive in their own right, their true significance lies in what they suggest about the practical challenges facing instructional designers today.
For decades, we've accepted the inherent limitations of traditional learner research—limited access, inconsistent responses, and systematic biases—as simply "the way things are." However, recent findings indicate that AI personas may offer solutions to problems we've long considered intractable in our field:
Availability and accessibility: Traditional learner research often requires weeks or months to coordinate participant schedules, secure organisational approvals, and navigate competing priorities. AI personas provide 24/7 availability for iterative testing without the sampling biases that constrain access to actual learners.
Temporal consistency: Instructional designers frequently encounter the challenge of participants responding differently to identical questions depending on their mood, workload, or recent experiences during data collection. Unlike human participants, AI personas maintain psychological consistency regardless of when consultations occur.
Emotional state modelling: Conventional research methods typically capture learners in a single emotional state at one point in time, missing how stress, confidence, or frustration might alter their learning behaviours. Well-constructed personas can simulate learner reactions under different emotional conditions—stress, confidence, frustration, curiosity.
Reduced social desirability bias: Focus groups and interviews often produce responses that participants believe sound "professional" or appropriate rather than reflecting their genuine experiences and preferences. AI personas based on private interviews often reveal more authentic perspectives than participants share in formal research settings.
AI Personas in Action: What We’ve Discovered on the Bootcamp (So Far)
To move beyond theoretical understanding, I designed a systematic experiment within my bootcamp to test AI persona reliability under real-world instructional design conditions.
Our Testing Hypothesis
Before testing starts, we create a hypothesis, something like: AI personas built from rich interview data will demonstrate superior consistency and accessibility compared to traditional learner research methods, while maintaining comparable insight quality.
We also make a number of predictions about what the AI persona will do and how it will behave, e.g.
The AI Learner Persona Will….
Provide more consistent responses across multiple testing sessions than human participants
Offer immediate availability for design iteration without logistical barriers
Reveal authentic behavioural patterns without social desirability bias
Maintain psychological coherence when simulating different emotional states
Experimentation so far has revealed that AI personas, when constructed using rigorous methodology, can provide more reliable insights than many traditional approaches—not through superior accuracy compared to human participants, but by eliminating systematic biases and accessibility barriers that constrain conventional learner research.
Participants have reported that AI personas enable them to:
Conduct immediate design testing without extended delays for learner availability and institutional approval processes
Simulate various emotional states that single-point-in-time survey methodologies cannot capture
Maintain psychological consistency across multiple testing scenarios and timeframes
Access authentic insights without social desirability bias influencing participant responses
Predict learner reactions before committing resources to development phases.
Of course, working with AI also comes with sometimes significant risks, which must be managed. These include:
Technical Domain Handling
The original research notes that AI personas may default to generic responses in highly technical or specialised domains if not provided with sufficient domain-specific interview data.
For instructional design, this means that personas are most reliable for simulating general attitudes, behaviours, and soft skills, but may be less accurate for technical training (e.g., compliance training, advanced software skills) unless interviews specifically probe for technical expertise.
Instructional designers should supplement AI personas with targeted expert input or additional interviews when developing content for highly technical domains.
Cultural Transferability
While AI personas can reduce some forms of bias, most validation research derives from Western, English-speaking populations.
The original study quantified performance variations across demographic groups and found that reliability may vary significantly in global or multicultural contexts. For global training programs, instructional designers should:
Conduct interviews with a culturally diverse sample
Regularly test personas against feedback from learners in different regions
Be cautious about overgeneralising insights from a single cultural context.
Temporal Stability
The study emphasises that interview data represents a static snapshot, and that personas may become less accurate as learner needs, organisational culture, or technology change. To maintain reliability, we must:
Establish a protocol for periodic persona updates (e.g., after major course rollouts, annually, or when new data emerges)
Continuously validate persona predictions against real learner behaviour and feedback
Treat personas as dynamic, evolving tools rather than static profiles.
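If you want to operationalise that maintenance protocol, even a trivial staleness check can serve as a reminder. A minimal sketch in Python; the one-year default interval is my own placeholder, not a figure from the research, so pick whatever cadence matches your rollout schedule:

```python
from datetime import date

def persona_needs_refresh(last_updated: date, today: date,
                          max_age_days: int = 365) -> bool:
    """Flag a persona whose interview data is older than the review interval.

    The 365-day default is an assumption for illustration; align it with
    your own course rollout cadence or organisational change rhythm.
    """
    return (today - last_updated).days > max_age_days

# A persona interviewed in early 2024, checked mid-2025, is due a refresh
print(persona_needs_refresh(date(2024, 1, 15), date(2025, 6, 1)))  # → True
```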
Quick-Start Guide: Build Your First AI Learner Persona in 5 Steps
So, taking all of this into account, how might you try this for yourself? Here's a quick-start guide to building and testing your first AI learner persona.
Download a full version of the step-by-step guide here.
Step 1: Define Your Target Learner
Before you can build an effective AI learner persona, you need to know exactly who you're looking for. Many personas fail because they focus on demographics (age, job title, education) rather than the psychological factors that actually drive learning behaviour.
Build your target profile around these six research-backed dimensions:
Contextual pressures — Why they're taking this training, what's at stake professionally, what competing priorities they're managing
Behavioural patterns — How they typically handle feedback, seek help when confused, manage time pressure, and respond to authority
Cognitive preferences — How they process new information, what increases or decreases their confidence, how they prefer to practice new skills
Emotional triggers — What frustrates them in learning environments, what motivates them to persist, how they handle failure or confusion
Past experiences — Specific training successes and failures, existing knowledge and skills, relationships with managers and peers, career trajectory and aspirations
Constraints and resources — Technology access, time availability, organisational culture, and family obligations
🚀 Phil's Tip: Use this profiling approach to identify who to interview - look for people whose actual situation matches this psychological sketch, not just their job title.
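If you like to keep your profile work structured from the outset, the six dimensions above can be captured in a simple data structure that flags where your evidence is still thin. This is an illustrative sketch only; the field names are my own shorthand for the dimensions above, not terminology from Park et al.:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class LearnerProfile:
    """Target-learner profile built around the six dimensions above."""
    name: str
    contextual_pressures: list[str] = field(default_factory=list)
    behavioural_patterns: list[str] = field(default_factory=list)
    cognitive_preferences: list[str] = field(default_factory=list)
    emotional_triggers: list[str] = field(default_factory=list)
    past_experiences: list[str] = field(default_factory=list)
    constraints_and_resources: list[str] = field(default_factory=list)

    def missing_dimensions(self) -> list[str]:
        """List any dimension you haven't gathered evidence for yet."""
        return [k for k, v in asdict(self).items()
                if isinstance(v, list) and not v]

# A partially completed profile flags the gaps still to fill before interviewing
profile = LearnerProfile(
    name="Sam",
    contextual_pressures=["Mandatory compliance deadline this quarter"],
    behavioural_patterns=["Googles answers rather than asking colleagues"],
    emotional_triggers=["Frustrated by abstract theory with no examples"],
)
print(profile.missing_dimensions())
```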
Step 2: Conduct a High-Fidelity Interview (with one or more real humans!)
This is where many AI persona projects fail. You cannot simulate the interview step—it must involve a real human who matches your target profile.
Your goal: Capture 6,000-7,000 words of rich narrative data. This number isn't arbitrary—research shows this is the minimum threshold to generate reliable persona behaviours, responses & insights. As a rule, more input = better output.
How: Schedule 90-120 minutes with one or more learners, using structured question categories: broad life context, learning experiences, emotional and motivational factors, and specific behavioural preferences.
🚀 Phil's Tip: Record the conversation (with permission) so you can focus on listening. When something sparks emotional response, dig deeper with "Tell me more about that" or "What else?"
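Since the word-count threshold matters, it's worth checking your transcript's depth before moving on to Step 3. A minimal Python sketch using the 6,000-7,000 word figures from above (the wording of the verdicts is my own):

```python
def transcript_depth(transcript: str, minimum: int = 6000, target: int = 7000) -> str:
    """Classify a transcript against the word-count thresholds discussed above."""
    words = len(transcript.split())
    if words < minimum:
        return f"{words} words: below the {minimum}-word reliability threshold; keep interviewing"
    if words < target:
        return f"{words} words: usable, but more narrative detail will improve the persona"
    return f"{words} words: rich enough for reliable persona construction"

# Example check on a transcript of roughly 6,500 words
print(transcript_depth("word " * 6500))
```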
Step 3: Create Structured Persona Data
Transform your interview transcript into structured data that your AI can draw from when simulating responses. Transcribe your interview, then (with help from Perplexity, Claude, ChatGPT or similar) run a semantic analysis and organise the content into: episode categorisation, emotional mapping and behavioural anchors.
🚀 Phil's Tip: Copy and paste relevant quotes under each heading—don't summarise yet. Keep contradictions as they're often the most psychologically revealing parts of your persona.
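If you prefer your structured data machine-readable rather than in a document, the three organising axes (episode category, emotional mapping, behavioural anchors) can be modelled directly. An illustrative Python sketch; the example quotes and tags here are invented for demonstration:

```python
from dataclasses import dataclass

@dataclass
class InterviewExcerpt:
    """One verbatim quote, tagged along the three axes described above."""
    quote: str                 # copied verbatim, not summarised
    episode_category: str      # e.g. "past training failure"
    emotion: str               # e.g. "frustration"
    behavioural_anchor: str    # the concrete behaviour the quote evidences

def group_by_category(excerpts: list[InterviewExcerpt]) -> dict[str, list[str]]:
    """Organise verbatim quotes under episode-category headings."""
    grouped: dict[str, list[str]] = {}
    for e in excerpts:
        grouped.setdefault(e.episode_category, []).append(e.quote)
    return grouped

excerpts = [
    InterviewExcerpt("I just skipped the videos and googled it",
                     "past training failure", "impatience",
                     "abandons linear content under time pressure"),
    InterviewExcerpt("My manager never asked what I'd learned",
                     "workplace culture", "resignation",
                     "low expectation of post-training follow-up"),
]
print(group_by_category(excerpts))
```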
Step 4: Test Your Persona's Reliability with Structured Simulation
How: Choose your AI platform (ideally GPT-4o, the model tested by Park et al.) and use this prompt template:
Context: You are [NAME], a target learner on a training about [TRAINING INFO], learner persona based on the detailed interview data provided below. You are a real person with specific experiences, emotions, behavioural patterns, and motivations. Stay in character throughout our conversation and reference specific stories and experiences from your background when making decisions or expressing reactions.
Instructions: I will present you with learning scenarios, content, and situations. For each scenario, you must always:
a. React authentically based on your established personality, past experiences, and emotional patterns
b. Reference specific memories or experiences from your interview when explaining your responses
c. Show realistic emotional reactions that align with your documented triggers and motivations
d. Make decisions that reflect your established behavioural patterns and constraints
e. Explain your internal thought process, including any assumptions, concerns, or emotional responses
Details:
Always ground your responses in specific experiences mentioned in your interview data
If you feel stressed, confident, frustrated, or curious, explain why based on your past experiences
Consider your contextual pressures (time, competing priorities, workplace culture) when responding
Input:
Interview data: [STRUCTURED DATA FROM PREVIOUS STEP]
Scenario: [SPECIFIC LEARNING SCENARIO WITH ENVIRONMENTAL CONTEXT]
Question: [SPECIFIC QUESTION ABOUT THOUGHTS, FEELINGS, OR NEXT ACTIONS]
Then test with structured scenarios including environmental context setting, emotional state variation, time pressure modelling, social context variation, and failure response testing.
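If you run this template repeatedly across scenarios, it can help to assemble the prompt programmatically from your structured data. A sketch in Python; the template string here is a condensed paraphrase of the full prompt above, and the example field values are placeholders:

```python
# Condensed paraphrase of the persona prompt template described above
PROMPT_TEMPLATE = """Context: You are {name}, a target learner on a training about \
{training_info}, a learner persona based on the detailed interview data provided below. \
Stay in character and reference specific stories from your background.

Instructions: React authentically, reference specific memories from your interview, \
show realistic emotional reactions, make decisions that reflect your established \
behavioural patterns, and explain your internal thought process.

Input:
Interview data: {interview_data}
Scenario: {scenario}
Question: {question}"""

def build_persona_prompt(name: str, training_info: str, interview_data: str,
                         scenario: str, question: str) -> str:
    """Fill the template with structured persona data and one test scenario."""
    return PROMPT_TEMPLATE.format(name=name, training_info=training_info,
                                  interview_data=interview_data,
                                  scenario=scenario, question=question)

prompt = build_persona_prompt(
    name="Sam",
    training_info="data-privacy compliance",
    interview_data="(structured data from Step 3)",
    scenario="First week on the job, asked to finish the course by Friday",
    question="What are you thinking as you open the first module?",
)
print(prompt)
```

Paste the resulting text into your chosen platform, then vary only the scenario and question fields between runs so your consistency tests stay comparable.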
🚀 Phil's Tips:
A reliable AI persona should maintain psychological & linguistic consistency while adapting to new experiences
Test the same scenario with different emotional states to understand how context affects learning behaviour
Test your persona's predictions against real learner actions or feedback
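One cheap first-pass signal for the consistency checks above is lexical overlap between two runs of the same scenario: high overlap suggests the persona's voice is staying stable. This is only a rough proxy; a proper check still needs human judgement or semantic comparison. An illustrative Python sketch:

```python
def lexical_overlap(response_a: str, response_b: str) -> float:
    """Jaccard overlap of the two responses' word sets (0.0 to 1.0).

    A crude stability signal only: it ignores meaning, so treat it as a
    prompt to look closer, not as a verdict on persona reliability.
    """
    a, b = set(response_a.lower().split()), set(response_b.lower().split())
    if not (a or b):
        return 1.0
    return len(a & b) / len(a | b)

# Two hypothetical runs of the same scenario with the same persona
run_1 = "I would skip the video and look for a worked example instead"
run_2 = "Honestly I would skip the video and hunt for a worked example"
print(round(lexical_overlap(run_1, run_2), 2))  # → 0.71
```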
Step 5: Test the Persona In Your Day-to-Day Work
Once you've built and validated your AI persona, integrate it into various relevant stages of your instructional design process across all ADDIE phases: Analysis, Design, Development, Implementation, and Evaluation.
Here are some examples to get you started:
Analysis - Understanding needs before the course exists
Simulate detailed conversations about training experiences, fears, and expectations
Test prompts like: "What would make you hesitate to sign up for this course?" or "It's your first week on the job and you've been asked to complete this course by Friday. What are you thinking?"
Discover psychological drivers that surveys miss, like fear of irrelevance being stronger than fear of difficulty
Design - Testing course structure and flow
Share early outlines and learning objectives to get anticipatory feedback on structure and emotional impact
Ask: "Here's the proposed course outline. What parts feel overwhelming or unclear?" or "Given your past experience with group projects, how would you feel about this collaborative assignment?"
Identify structural issues like overly theoretical introductions causing disengagement or group work creating anxiety
Development - Testing working prototypes
When generating course materials or feedback, use your persona's profile to create content, activities, communications, etc. that reflect their specific needs, challenges and communication style.
Feed your persona complete lesson materials and simulate their moment-by-moment learning experience
Present actual content and ask: "You're 3 minutes into this video. What's happening in your mind right now?" or "You just got 3 out of 5 correct. The feedback says 'Review sections 2 and 4.' What's your emotional response?"
Catch micro-interaction problems like generic feedback triggering negative spirals or cognitive overload accumulating unpredictably
Implementation - Optimising support systems
Predict likely points of confusion, disengagement, or technical difficulty by simulating various scenarios
Test with: "You're stuck on this concept and suggested resources aren't helping. You have 20 minutes left in your lunch break. What do you do?"
Design proactive support that feels helpful rather than intrusive, and feedback systems that acknowledge effort before offering corrections
Evaluation - Predicting outcomes and satisfaction
Have your persona "complete" the entire course & simulate post-training reflection and real-world application
Ask: "It's been two weeks since you finished this course. Your manager asks what you learned. How do you respond?" or "You're facing a situation where this training should be relevant. How confident do you feel?"
Predict satisfaction and transfer issues before launch when addressing them is still feasible
🚀 Phil's Tip: Start each design session by "checking in" with your persona about the decisions you're considering. Their psychological consistency will help you catch problems your expert perspective might miss.
Conclusion: Using AI to Drive Substantive Improvements in Instructional Design
While it's still very early days, systematic implementation and evaluation with bootcamp participants suggests that AI personas represent a potentially transformative, methodologically sound advancement in learner-centred design—not through replacement of human insight, but by making human-centred design more accessible and more systematic.
For those who have built AI learner personas, the impact has proved significant: designers who previously relied on stakeholder opinions and assumptions gained access to psychologically consistent learner perspectives throughout their complete design process. They could test concepts immediately, explore emotional reactions, and identify potential issues while remediation remained feasible.
This emergence of reliable AI personas indicates a fundamental shift in how we approach instructional design:
From reactive to predictive design: Rather than designing based on past experiences and receiving feedback after development, we can now simulate learner responses before committing resources to development phases.
From generic to psychologically informed design: Instead of treating learners as cognitive processing units with preferences, we can engage with them as complete psychological entities with complex motivations, concerns, and contextual pressures.
From assumption-based to evidence-based iteration: AI personas offer a methodologically sound middle ground between expensive learner research and assumption-driven decisions, providing evidence-based insights that are more reliable than conjecture and more accessible than comprehensive learner studies.
These are the implications for instructional design, but what about the implications for instructional designers? Effective implementation of AI personas requires developing new competencies alongside traditional instructional design capabilities, including:
interview and qualitative research skills
prompt engineering for behavioural simulation
critical evaluation of AI outputs
integration with human feedback methodologies.
There are implications for how we think about ethics, too. If and when AI personas become integrated into design practice, several ethical considerations require systematic attention:
Privacy and consent: Interview participants must understand that their narratives will form the basis for AI simulation.
Bias mitigation: Regular review by diverse perspectives can identify biases that designers may overlook.
Representation and power dynamics: Employees accessible for interviews may not represent the complete diversity of eventual learners.
The replacement risk: AI personas should enhance understanding of learners, not substitute for authentic human feedback.
As so often seems to be the case, the most successful instructional designers of the (near) future will not be those who resist AI implementation or those who adopt it uncritically: success will belong to those who develop effective AI collaboration skills while maintaining deep and critical expertise as the central focus of their practice.
If you're interested in exploring these techniques through hands-on application—along with ~30 other practical AI applications for learning design—consider applying for a place on my AI & Learning Design Bootcamp.
Until then: design systematically, test rigorously, and simulate with methodological precision.
Happy innovating!
Phil 👋
PS: You can download a full, PDF version of the step-by-step guide here.