Accelerating Excellence?
Initial findings from my research into the impact of AI on the speed & quality of learning design.
Over the past couple of months, I’ve been working with AI expert Gianluca Mauro and a team of learning designers to run controlled tests on how AI affects both the speed and quality of learning design. The research tests the following hypothesis:
Specialised AI tools trained on evidence-based learning design principles can outperform both a) traditional human-only learning design processes and b) ChatGPT-assisted learning design processes on speed and quality.
To test this hypothesis, Gianluca Mauro and I have worked with hundreds of learning designers to build an AI prototype, which we’ve called Epiphany.
The research is in its early stages and ongoing, but in the spirit of “researching in public”, here’s what we’ve learned so far by comparing the speed and quality of designs produced by learning designers across three different design methodologies:
Manual design (human-only, no AI)
Assisted by a general AI tool (human + ChatGPT)
Assisted by a specialised AI tool (human + Epiphany)
Accelerating Excellence? The Role of AI in Enhancing Learning Design
Research Context
Despite the potential for innovation within the learning and instructional design sector, its full benefits have yet to be realised. The majority of tools built for learning and instructional designers enable the creation of content rather than the creation of learning designs optimised for the learners and goals in question.
TLDR: the vast majority of learning experiences that we build and complete are not designed using an evidence-based approach and, as a result, are not optimised for motivation and impact.
To use the analogy of house building: as learning designers we usually (often through necessity) go straight to build without having an expert architect create a blueprint first. Houses appear quickly, but most of them fall down.
Our study aims to test the hypothesis that by building a tool specifically for the design phase of the learning design process, we can transform both the speed and quality of this “pre-authoring” work.
The study explores how specific AI applications impact both design quality and speed, with the aim of contributing to our collective understanding of the value and impact of AI in the context of learning design and, ultimately, learner outcomes.
Methodological Approach
The research is being completed in collaboration with two groups:
Learning Designers (Testers): A mixed group of learning designers with varying levels of learning design experience was tasked with creating learning designs. Their backgrounds and familiarity with AI tools were considered in participant selection to ensure a representative sample, with stratified random sampling used to minimise bias.
Learning Designers (Evaluators): A group of expert learning designers has been recruited to evaluate the learning designs against predefined criteria. These experts blind-score the designs based on a pre-defined set of quality criteria based on learning science research (more info below).
Our analysis involves comparing the performance of the tester group across three methods:
Method 1 - Human-only: a learning designer turns a design brief into a design storyboard (i.e. a “production-ready” design) using their established methods and process.
Method 2 - Human + General AI (ChatGPT): a learning designer turns a design brief into a storyboard using their established methods and process with assistance from ChatGPT.
Method 3 - Human + Specialised AI (Epiphany): a learning designer turns a design brief into a storyboard using their established methods and process with assistance from a specialised AI tool.
Data is collected from learning designs created under controlled conditions by designers of varying experience levels and subsequently evaluated by experts. After the design part of the test:
Quality is measured by a team of independent expert learning designers, who blind-score designs using a rubric based on learning science research with six categories: instructional strategy, learning objectives, content, activity, feedback, and assessment.
Speed is captured as “time to design”, with the same team of independent expert learning designers also assessing the degree of “completeness” of each storyboard as part of the scoring exercise.
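As a quick illustration of the aggregation behind a score like this, here’s a minimal sketch in Python. The rubric names are taken from the description above, but the function names and data are hypothetical assumptions, not the study’s actual tooling or dataset: each design’s six 1–5 rubric scores are averaged into one quality score, and those are averaged across all designs produced with a given method.

```python
# Hypothetical sketch only: not the study's actual tooling or data.
RUBRIC = ["instructional strategy", "learning objectives", "content",
          "activity", "feedback", "assessment"]

def design_score(category_scores):
    """Average one design's six 1-5 rubric scores into a single quality score."""
    assert set(category_scores) == set(RUBRIC)
    return sum(category_scores.values()) / len(category_scores)

def method_average(designs):
    """Mean quality score across all designs produced with one method."""
    return sum(design_score(d) for d in designs) / len(designs)

# Illustrative: two blind-scored designs from one method
example = [
    {c: 4 for c in RUBRIC},  # scored 4 in every category
    {c: 3 for c in RUBRIC},  # scored 3 in every category
]
print(round(method_average(example), 2))  # 3.5
```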
Initial Findings
After our first round of testing with 35 testers, results indicate that specialised AI tools trained to enable evidence-based learning design have the potential to significantly improve both the speed and quality of learning design:
Quality: Learning designs created using Epiphany achieved an average quality score of 4.08 on a 5-point scale, compared to 3.22 for ChatGPT-assisted designs and 2.37 for manual designs.
Speed: The “time to design” (i.e. storyboard) when using Epiphany was on average 72% faster than designing with ChatGPT and 368% faster than traditional, human-only methods.
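For clarity on how a “% faster” figure is read: it expresses the extra time the slower method takes as a percentage of the faster method’s own time. The sketch below shows the arithmetic with purely illustrative numbers, chosen only to reproduce the reported percentages; they are not the study’s measured times.

```python
def percent_faster(slow_minutes, fast_minutes):
    """How much faster the fast method is, expressed as a
    percentage of the fast method's own time."""
    return (slow_minutes - fast_minutes) / fast_minutes * 100

# Illustrative times only (not measured data): a manual design taking
# 468 minutes versus an AI-assisted one taking 100 minutes would be
# reported as 368% faster; 172 minutes versus 100 would read as 72%.
print(round(percent_faster(468, 100)))  # 368
print(round(percent_faster(172, 100)))  # 72
```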
Implications for Learning Designers
It’s very early days, but preliminary findings suggest some significant potential for the application of both general and specialised AI in the field of learning and instructional design.
With the right prompting and validation, general AI tools like ChatGPT show significant potential in increasing both the speed and quality of learning designs.
Meanwhile, specialised tools trained specifically to streamline the workflow and enable evidence-based decision-making for learning designers could have a potentially transformative effect on both efficiency (“time to design”) and effectiveness (impact on outcomes).
TLDR: Where evidence-based design methods have traditionally proven too inaccessible and time-consuming to be “productised”, AI could empower learning designers to make evidence-based design decisions quickly and to scale high-quality, high-impact learning design practices.
In some ways, this moment reminds me of the rise of Figma in 2016: a tool that cohered and accelerated the UX/UI profession by empowering UX designers with specialised features that both streamlined the design process and set new, higher standards for design quality.
The rise of Figma shifted the critical skill set for UX professionals away from the functional and administrative elements of design towards effective collaboration, design-system thinking and a deeper understanding of human motivation, psychology and behaviour.
In a similar way, the rise of more specialised AI tools for learning designers could see a shift away from functional design tasks (think: emails, meetings, admin, content creation) and towards more deeply specialised skills in AI literacy, data interpretation, pedagogical innovation and behavioural science.
Next Steps
Gianluca and I will run more tests in the coming weeks. In the process, we’ll dig deeper into how particular AI functions affect the speed and quality of work for different profiles of learning designers.
I’ll keep you updated on what we learn along the way.
In the meantime, if you’d like to get involved, you can apply to be one of our expert learning design testers here.
Happy experimenting!
Phil 👋
PS: If you want to learn more and get hands-on with AI, you can apply for a place on my AI Learning Design Bootcamp.
This looks fascinating Phil. Traditionally, I would have thought in terms of different templates within a learning design tool such as Storyline that were configured for teaching various types of content or knowledge (declarative, procedural, etc).
For example, teaching procedures would have a template that was configured as Show the Procedure; Try the Procedure (with appropriate scaffolding that was incrementally removed as the learner progressed); and finally, Test the Procedure (no scaffolding, the user completes the procedure or task unaided). Another example would be teaching principles with an in-built scenario type treatment.
Epiphany looks to take things way beyond this. As someone who works with AI for learning design (a lot of it learned in your AI-Powered Learning Design bootcamp), I’d imagine Epiphany will generate learning strategies and designs that are 100% appropriate to the *particular* learning need and are designed for maximum transfer and effectiveness.
Looking forward to hearing more!