How Close is AI to Replacing Instructional Designers?
The Results Part 3: Creating a Course Outline
Hello friends! 👋
Welcome to the grand finale of our three-part series exploring the impact of AI on instructional design.
If you're just joining us, you might want to catch up on Part 1: Writing Learning Objectives and Part 2: Selecting Instructional Strategies.
Today, we're diving into what many consider to be the role-defining task of the instructional designer: creating a course design outline.
Recap & Results So Far
Before we jump into the results, let's refresh our memory on the experiment setup:
I asked three colleagues to complete three common ID tasks using the same course design brief:
Colleague 1: An experienced instructional designer (ID), working solo with no AI
Colleague 2: A "novice" with no ID experience, assisted by AI - specifically, ChatGPT 4.0 + Consensus GPT
Colleague 3: An experienced instructional designer, assisted by AI - specifically, ChatGPT 4.0 + Consensus GPT
Then, I asked ~200 instructional designers to blind-score the quality of the outputs and guess which were produced by an expert ID, expert ID + AI, or novice + AI.
In our previous two tests, we've seen some interesting patterns start to emerge:
Writing Learning Objectives: The expert ID + AI combination came out on top, with 71% of respondents rating their objectives as good or very good. Surprisingly, the novice + AI outperformed the experienced ID working alone.
Selecting Instructional Strategies: Again, the expert ID + AI duo led the pack, with 76% rating their strategy as very good or exceptional. The novice + AI impressed once more, with 79% rating their work as very good or exceptional, while the experienced ID working solo came in last.
TLDR: The results have consistently shown the power of AI assistance in enhancing ID work, regardless of the user's experience level. They've also highlighted a growing recognition of AI's potential in the field, alongside some persistent misconceptions about AI-assisted work.
So, how did the participants perform in their final task: creating course outlines?
Let’s go!
🥉 Third Place: Instructional Designer (no AI)
When it came to the perceived quality of course outlines, once again the experienced instructional designer working without AI came in third place.
However, the bar here was pretty high and the results weren't all bad:
40% of respondents rated their outline as good
24% considered it very good
5% thought it was exceptional
Interestingly, 52% of respondents correctly guessed that this was the work of a human ID without AI assistance.
Some key observations from the feedback:
"Not extremely detailed in outline; not tied to measurable objectives."
"Suitable outline, although brief."
"Full knowledge from the expert, but not delivered compellingly."
TLDR: This result suggests that many IDs associate AI-assisted work with higher levels of detail and polish, positioning AI as a helpful sidekick for primarily procedural and editorial support in the ID process.
🥈 Second Place: Novice + AI
Our AI-assisted novice once again impressed, coming in second place:
38% of respondents rated this course outline as good
42% considered it very good
6% thought it was exceptional
52% of respondents guessed that this was the work of a non-ID with AI assistance, suggesting that many assume AI has at least some understanding of instructional design - enough to produce a good or very good course outline.
What did respondents like about this outline? Here’s a selection of their comments:
"Great detail, complex, and thorough outline."
"Appears to support the direction and goal with the varied learning to allow for different learning strengths."
"A much longer answer including some broad ranging topics that an ID without subject matter expertise might not have come up with."
"Well sequenced and phrased nicely."
Respondents’ comments suggest an association between the use of AI and:
an increased level of detail and structure;
a more developed understanding of the topic - i.e. AI is perceived by some to be able to play the part of a “virtual SME”.
TLDR: The perception of AI as both a structural aid and a "virtual SME" suggests that IDs see AI as capable of providing both procedural support (how to structure a course) and content knowledge. This also reinforces what we've seen in previous tasks and in other research: AI has the most impact on the productivity and performance of novices (compared with experts).
🥇 First Place: Instructional Designer + AI
Once again, the winning combination proved to be the experienced instructional designer working with AI.
41% rated this course outline as very good, while 45% considered it to be exceptional.
An impressive 75% of respondents correctly identified this as the work of an expert ID with AI assistance. So what made respondents think this was the work of an expert ID + AI?
For some, it was the perceived “human touch” that signalled optimal quality. As one respondent said: "This feels much more human. The language is more unique. The objectives clearly tie back to each module."
For others, it was the combination of pedagogical expertise (perceived to be human in origin) and AI efficiency (demonstrated in detail and structure) which led to the conclusion that this was the work of expert + AI:
"Seemed more fleshed out, something I could do effectively and quickly using AI as an ID."
"It just feels like AI takes the content a strong ID would write to a whole new superior level -- in this case, outlining the objective achieved by module."
"The addition of objective tagging gives away that an ID was involved, but this has more detail than a human would typically produce (for this type of project)."
TLDR: The high rate of correct identification of this as the work of ID + AI (75%) suggests growing appreciation among IDs of how AI can augment expert work. The split in perceptions - some associating quality with the "human touch" and others with AI - highlights a tension in the perceived role and value of the human ID vs AI.
Key Observations: Creating Course Outlines
So, what did we learn overall from this third experiment?
Recognition of AI-assisted work: A significant majority correctly identified the expert ID + AI work, suggesting growing awareness of how AI can enhance ID outputs. Primarily, the value of AI’s input is seen to be procedural and editorial, e.g. summarising, structuring and outlining, rather than substantive. As one respondent put it, "It just feels like AI takes the content a strong ID would write to a whole new superior level -- in this case, outlining the objective achieved by module."
AI as a virtual SME: That said, some respondents see AI as having the ability to act as something of a "virtual SME", perhaps suggesting a growing confidence in the reliability of AI's outputs. It should be noted that confidence in AI's outputs among respondents was highest when AI cited research that could be verified and validated. As one respondent said, "All the references to research makes me think Perplexity AI was leveraged here. I like how the approach is justified at the end -- it takes to a level I don't think an average human ID would without AI, and it certainly makes it more powerful as a recommendation."
Efficiency and effectiveness: A number of comments suggest that AI is considered a tool to help IDs work more quickly and effectively, potentially allowing more time for higher-level design decisions: "Seemed more fleshed out, something I as an ID could do effectively and quickly using AI."
The perceived human touch remains valuable: Respondents consistently place a high value on the "human" elements of the course outline creation process. Among the items considered key signifiers of human work are:
“pro” ID tasks like connecting objectives to content and activities. As one respondent put it, “The objectives clearly tie back to each module”, which was enough to convince them that this was the work of a pro human ID.
the tone and use of language, suggesting generally low and misguided expectations of AI's ability to communicate effectively. As one respondent put it, when an outline is "well sequenced and phrased nicely" it is considered to be the work of an expert human (even though it was written by AI). Another outline produced by AI was described as feeling "… much more human [because] the language is more unique."
Key Takeaways: The Role of AI in Instructional Design
Looking at all three tasks - writing learning objectives, selecting instructional strategies, and creating course outlines - several overarching themes start to emerge:
AI outperforms human-only ID efforts: Across all tasks, AI-assisted work consistently produced higher-quality outputs than human-only efforts, even from experienced IDs.
AI capabilities exceed perceptions: AI demonstrated abilities in areas often considered uniquely human, such as creating well-structured, contextually appropriate content and linking objectives to modules effectively.
AI is an ID equaliser: Novices using AI often produced work comparable to or better than experienced IDs working alone, suggesting AI's potential to level the playing field in the profession.
IDs value the perceived "human touch" most: Despite AI's strong performance, respondents consistently associated what they valued most with expert human input, even when those elements were actually AI-generated.
So, what does all of this tell us about the likely future of instructional design and the role of instructional designers?
If these findings are anything to go by, the future of instructional design most likely lies in a symbiotic relationship between human and AI. As is the case with all AI use cases, the specifics of that relationship will vary depending on the profile of the user.
Expert IDs: AI as Apprentice
For expert IDs, AI will likely operate as an “ID apprentice” which responds to clearly defined instructions informed by the ID’s expertise.
In this collaboration, the expert uses their domain knowledge and experience to give AI clear, structured instructions on both what it needs to do and (critically) how it needs to do it. AI helps the expert increase the efficiency and effectiveness of their day-to-day work, but the human in the loop is the holder of the domain knowledge required to ensure that AI does the right thing in the right way.
In this relationship, two skills are equally critical:
Domain knowledge: an in-depth understanding of what great instructional design looks like. Garbage in, garbage out. Gold in, gold out.
Prompt engineering: an in-depth understanding of how best to “talk to AI” in order to optimise its performance as an apprentice. It's notable that in the cases where the expert ID worked with AI, the highest-rated outputs were those produced with structured rather than conversational prompts (see the sketch below).
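To make this concrete, here's a minimal sketch of what a structured prompt might look like. This is an illustrative example of the technique, not the exact prompt used in this experiment:

Role: You are an instructional design assistant working under my direction.
Task: Draft a course outline based on the design brief below.
Constraints: Organise the content into modules; tag each module with the learning objective(s) it addresses; suggest one learning activity per module, with a brief rationale.
Output format: For each module, list the module title, objectives addressed, activities and rationale.
Brief: [paste the course design brief here]

Compared with a conversational request like "Can you help me outline a course?", the structured version encodes the expert's domain knowledge - module structure, objective tagging, rationale - directly into the instructions, leaving far less for AI to guess.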
Novice IDs: AI as Mentor
For novice IDs, AI operates less like an apprentice and more like an “ID mentor”, helping them get to grips with the fundamentals - e.g. writing objectives - more quickly than ever before.
In this collaboration, AI takes the role of the expert, using its domain knowledge of ID to augment the emerging ID's performance. For many IDs, this is a scary scenario for two reasons - one practical, one more existential:
What if AI’s knowledge of instructional design is not optimal?
If AI can enable anyone to design a learning experience, what’s the point of me?
The answer to question 1 is that AI's understanding of ID is definitely not optimal. Generic AI tools like ChatGPT, Copilot and Claude are trained primarily on internet data. In practice, this means that AI's understanding of ID is - by definition - average, rather than optimal.
This in turn helps us answer question 2: in a post-AI world, expert IDs - as holders of domain knowledge - play a more critical role than ever as “humans in the loop” whose job it is to prompt and validate AI using their domain expertise.
Closing Thoughts
In a world where humans increasingly collaborate with AI, the role of human experience, knowledge and expertise in feeding, instructing and validating “the AI machine” is more critical than ever before.
As this short study has helped to confirm, while AI can and will play a key part in raising the quality bar for ID across the board (AI as ID mentor), optimal results come from the symbiotic relationship of expert human ID + AI (AI as ID apprentice).
In this sense, AI is a powerful force for the professionalisation of instructional design: by automating some of the more functional elements of our role (e.g. content generation) and placing renewed emphasis on deep domain knowledge, AI pushes us towards a place we have sought for decades, where our focus lies more squarely on excellence and - ultimately - impact.
As we learn more about the impact of AI in practice, one thing keeps becoming clearer: AI is most powerful when used as a tool for augmentation, not automation.
At least for now, the optimal AI use case is one where an expert ID leans into the power of AI to work smarter, faster, and more strategically. The combination of human expertise + AI assistance consistently produces the best results, pointing to a near future where AI augments rather than automates human-powered ID.
Happy innovating!
Phil 👋
PS: Want to get hands-on experience and develop critical AI skills with me and other instructional designers? Check out my AI Learning Design Bootcamp. Spots for 2024 are filling up fast!