How & Why Most Learning Experiences Fail Their Learners
The case for an evidence-based set of quality standards for learning design, powered by AI
Last year, I did some research which found that 97% of online courses fail.
Fail: a learning experience which is not optimised for learner motivation, learning gain [knowledge & skills acquisition] or mastery [deep understanding & expertise].
With a little help from AI, I’ve recently run some additional analysis of over 300 learning experiences. The sample included:
In the flesh, fully online & blended learning experiences.
Learning experiences from HE, K12, corporate training, MOOCs and both free and paid-for courses hosted on platforms like Udemy, Teachable etc.
The result? A whopping 98.8% of the learning experiences analysed were not optimised for learner motivation, learning gain or mastery.
In this post, I’ll explain how I got to this number and make a case for the creation of an evidence-based set of quality standards for learning design.
Learning from Other Industries
Most professions - from construction to manufacturing, technology, healthcare, finance and food production - have industry-wide quality standards which set out criteria that are used as a benchmark to ensure optimal levels of quality.
These standards have two functions:
They provide guidelines to inform how a product should be designed (Quality Assurance).
They provide a checklist of quality indicators used to assess a product before it goes to market (Quality Control).
If I want to build a house here in the UK, I first have to draw up a plan which shows that my idea meets a set of standards designed to ensure standardised levels of structural safety, energy efficiency and accessibility.
These standards are valuable because they:
Ensure the quality & reliability of the end products: standards help ensure that products or services meet certain minimum requirements.
Promote efficiency & effectiveness: standards help streamline processes and increase efficiency within an industry, providing a common framework for everyone to follow.
Promote consistency: standards help ensure that all contributors within an industry are operating on a level playing field, providing a set of guidelines that apply to everyone equally.
Facilitate interoperability & compatibility: standards help ensure that different products or services are compatible with each other, leading to accelerated connection, knowledge-sharing and industry-wide growth.
The Problem With Quality Standards in Education
The “quality” of learning experiences, online or otherwise, is typically based on two post-course measures:
Course Completion - the % of people who complete a learning experience
Learner Satisfaction - the rating of the experience by the learner
Both measures are problematic:
Course completion indicates learner persistence, but not necessarily learning gain. A learner might complete a learning experience but learn nothing from it. Another learner might complete 10% of a learning experience and learn something.
Similarly, learner satisfaction indicates learner enjoyment but not necessarily learning gain. A learner may enjoy an experience but learn nothing. Another learner might struggle and not enjoy the experience, but gain something in the process.
Existing quality rubrics provide learning designers with some helpful indications of what a successful learning experience looks like, but they are limited in two fundamental ways:
First, they describe what good looks like, but don’t specify how to achieve good. It’s like saying, “You need to build a house that’s safe” but providing no specific criteria or guidance on how to ensure safety.
Second, they don’t keep pace with the research. It’s the equivalent of some new research emerging on how to prevent house fires, but failing to reflect the findings in safety regulations.
Let’s take feedback as an example.
Thanks to the research, we know a lot about what optimised feedback looks like. Among other things, the factors which optimise feedback include:
Timing, distribution & cadence
Content & structure
Mode of delivery, e.g. verbal vs. written feedback
Despite this, the Quality Matters rubric for Higher Ed has a single standard for feedback which simply states that, “The course provides learners with multiple opportunities to track their learning progress with timely feedback”.
In short: learning science research has provided us with a detailed understanding of the what, why and how of effective feedback, but in the vast majority of cases this information does not translate into learning design practices or the existing rubrics for quality assurance which underpin them.
An Evidence-Based Set of Quality Standards for Learning Experience Design?
A question I asked myself a while back was: what if, following the lead of other industries, we created an evidence-based set of standards for learning design?
Years of research, testing and iteration later, I have recently launched and started to test DOMS™️ - an evidence-based process for learning design.
Why DOMS™️? Learning science research shows that there are three success-critical stages in the design of a learning experience:
Discovery - understanding your learners’ needs & motivations
Objectives & Mapping - defining, writing & sequencing learning goals
Storyboarding - defining the flow of content, activity, feedback, assessment & interaction
DOMS™️ breaks hundreds of pieces of always-up-to-date research down into a set of standards and guidance for the learning design process, as follows:
Discovery - a set of evidence-based standards & guidance to optimise the relationship between what the learner needs and what the experience delivers.
Objectives & Mapping - a set of evidence-based standards & guidance to optimise the way that objectives are written, positioned & sequenced.
Storyboarding - a set of evidence-based standards & guidance to optimise the quantity, form and flow of content, activity, feedback, assessment & interaction.
The purpose of DOMS™️ is to make it easy for anyone to apply the science of learning to learning experience design. It does this in three ways:
It provides an evidence-based process for learning design - think ADDIE with a PhD.
It provides an evidence-based set of standards & guidelines for the optimisation of each step of the learning design process (quality assurance).
It provides an evidence-based set of “end of the factory line” standards for assessing the quality and likely impact of each stage of a designed learning experience (quality control).
By using a set of evidence-based standards like DOMS™️ it’s possible to review a learning experience and assess the extent to which it’s optimised for learner motivation, learning gain or learning mastery, according to what we know from learning science research.
Standards like DOMS™️ also help us to dig down into the detail of a learning experience, identify precisely how and where it is and isn’t optimised, and make changes to drive improvements in learner motivation and mastery. As my research has shown, this is a much-needed exercise across all types of learning experience.
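To make the "quality control" idea concrete, here is a minimal sketch of how a standards-based review could be scored in code. The criteria names and weights below are illustrative inventions for this example, not the actual DOMS™️ standards, and the function names are my own.

```python
# Hypothetical sketch: scoring a learning experience against a rubric of
# weighted, evidence-based criteria. Criteria and weights are invented
# for illustration only - they are NOT the actual DOMS standards.

from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    weight: float  # relative importance of this criterion in the rubric

# Illustrative feedback criteria, loosely echoing the factors named above
# (timing, structure, mode of delivery)
FEEDBACK_CRITERIA = [
    Criterion("timely_feedback", 0.4),
    Criterion("structured_feedback", 0.35),
    Criterion("appropriate_mode", 0.25),
]

def optimisation_score(met: set[str], criteria: list[Criterion]) -> float:
    """Return the weighted share (0.0-1.0) of criteria the experience meets."""
    total = sum(c.weight for c in criteria)
    achieved = sum(c.weight for c in criteria if c.name in met)
    return achieved / total

# Example: a course that gives timely but unstructured, poorly delivered feedback
score = optimisation_score({"timely_feedback"}, FEEDBACK_CRITERIA)
print(round(score, 2))  # 0.4
```

A real review would of course involve many more criteria across all three DOMS™️ stages, and human (or AI-assisted) judgment about whether each criterion is met; the point here is simply that a weighted rubric makes "how optimised is this experience?" an answerable, comparable question.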
A Note on the Potential of AI
No post written in January 2023 would be complete without a shoutout for AI!
The big question I’ve been exploring over the past couple of weeks is: is it possible to build a version of ChatGPT powered by learning science? What would an AI-powered version of DOMS™️ look like, and how might it impact our ability to improve the quality and impact of online, blended and in the flesh learning experiences?
The initial results of my research are very exciting - more on this to come very soon!
Closing Thoughts & Question
Having a set of quality standards can help ensure that products and services meet a certain level of quality and performance, but they also inevitably come with a degree of risk.
One question I continue to explore is: do quality standards stifle or encourage innovation & creativity?
If everyone were required to conform to a set of evidence-based standards for learning design, would it make it easier to deliver high-impact innovations [no more finger-in-the-wind designing], or would it make it more difficult for new ideas and approaches to emerge?
I’d love to hear your thoughts.
PS: You can learn more about DOMS™️ on my website. If you want to get hands-on and try using DOMS™️ for yourself, you can apply for a place on my:
Learning Science Bootcamp: a four-week design adventure where you can work with me and a cohort of people like you to design an evidence-based learning experience using the DOMS™️ learning design engine.
Learning Science Sprint (coming soon): a self-paced, video-based intro to the DOMS™️ process, principles & practices.
AI for Educators Sprint (coming soon): a self-paced, video-based intro to leveraging AI for good by using it to design evidence-based learning experiences.
Thanks for reading The Learning Science Newsletter, Powered by DOMS™️! Subscribe for free to receive new posts and support my work.