Structured Prompting for Educators
Experiments in how to get the best out of your fave gen AI tools
Hey friends! In the last two weeks I’ve been lucky enough to spend time with learning experts at Deloitte & KPMG exploring, among other things, the art and science of prompting.
One thing I’ve learned along the way is that an increasing number of L&D professionals are using generative AI in their day-to-day work. However, what’s also clear is that they’re using it with mixed results.
The real game-changers are those who've mastered the art of communicating effectively with AI systems - aka those who have ventured into the world of structured prompting.
In this week’s post, I'll share some reflections on how to use structured prompting to optimise your AI skillz.
Conversational Prompting
Most gen AI tools operate like chatbots, which means that we tend to interact with them conversationally, as if they’re another person. This is one of the strengths but also one of the risks of generative AI: the barrier to entry is basically zero, but it encourages us to interact with it in ways that aren’t optimal.
One thing I’ve observed while watching L&D professionals in action over the last couple of weeks is that they tend to default (like most of us) to unstructured, conversational prompting.
Here’s a typical example:
And here are just a few initial observations of ChatGPT’s output:
Woah, that’s a LOT of objectives. ChatGPT is very eager to please. You need to talk to it like an apprentice and give it clear direction. If you don’t, it will get carried away.
The objectives aren’t well sequenced. Generally, the objectives are pretty well-written; they each have a clear purpose, making good use of verbs like define, explain, analyse etc. They also do a good job of situating the learning in the real world and keeping the experience active. But the objectives are not well sequenced: they jump around from low-level “describe” and “define” objectives to higher-level “create” and “evaluate” objectives and then back again. As a learning experience, this isn’t optimal for either motivation or mastery.
They’re not optimised for learner motivation. Research shows that when we direct objectives at the learner and include a statement of the “why” (e.g. by adding “so that you can [real world impact]”), it helps to optimise the experience for both motivation and mastery. Because this isn’t standard or typical practice, however, AI doesn’t do it by default.
These are just a few observations. The TLDR is that unstructured, conversational prompting is not the optimal way to work with AI.
So what’s the alternative?
Structured Prompting
As the name suggests, structured prompting is a method of giving AI very intentionally structured and worded prompts to (we hope) optimise the quality of its output.
Since AI isn’t really built to do this, it will take experimentation and effort to make a structured prompt work consistently (it is very hard to reach 100% consistency), but there are helpful frameworks to get you started.
The one I’ve been experimenting with over the last couple of weeks is Gianluca Mauro’s CIDI framework. It goes like this:
Context: Define AI’s role and objective. For example, “You are an instructional designer assisting in X.”
Instructions: Provide step-by-step tasks. E.g., “First do X, then Y.”
Details: Set parameters like “The course should be online and no longer than X hours.”
Input: Incorporate relevant data or documents.
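If you build prompts often, it can help to treat the four CIDI sections as a reusable template rather than retyping them each time. Here's a minimal sketch of that idea in Python; the section labels follow Mauro's framework, but the helper function and its parameter names are my own illustration, not part of any official tooling.

```python
def build_cidi_prompt(context, instructions, details, input_data):
    """Assemble a structured prompt from the four CIDI sections.

    context      -- the AI's role and objective
    instructions -- step-by-step tasks
    details      -- a list of parameters/constraints, rendered as a numbered list
    input_data   -- relevant data or documents to incorporate
    """
    # Number the details so each constraint is explicit and easy to reference.
    detail_lines = "\n".join(f"{i}. {d}" for i, d in enumerate(details, start=1))
    return (
        f"Context: {context}\n\n"
        f"Instructions: {instructions}\n\n"
        f"Details:\n{detail_lines}\n\n"
        f"Input:\n{input_data}"
    )

prompt = build_cidi_prompt(
    context="You are an instructional designer who is expert in writing learning objectives.",
    instructions="Use the input below to write learning objectives tailored to my learners.",
    details=["Be project-based.", "Be directed at the learner."],
    input_data="LEARNER: junior sales representatives.",
)
```

The payoff of a template like this is consistency: every prompt you send has the same four labelled sections, so you can tweak one constraint at a time and compare outputs fairly.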
So what happens when we try the same task using more structured prompting? Here’s the prompt I tried:
Context: You are an instructional designer who is expert in writing learning objectives.
Instructions: First, I will give you information about my learners, their start point and end point (ZPD) and what I want them to learn and why. You will use this information to write me a set of learning objectives which are tailored to my and my learners' needs and goals.
Details: Every learning objective must:
1. Be project-based.
2. Increase in complexity from most simple to most complex at a rate that is appropriate to the learner's ZPD.
3. Be directed at the learner, e.g. start with "You will XYZ."
4. Motivate the learner by stating the why, e.g. "You will XYZ so that you can [something of real world value to my learner]."
5. Be achievable within 3 hours.
Input:
LEARNER: junior sales representatives with an appetite to move to more senior roles.
TOPIC: Basic financial acumen.
GOAL: Decrease the sales team's costs by 5%.
ZPD: No prior experience. The learner’s goal is to get a basic understanding and some basic application.
Here are the results:
Key Takeaways
Structured prompting doesn’t produce perfect results, but it can have a significant positive impact on the quality of AI’s outputs.
Think of AI as an apprentice. Assume it knows nothing. Give it very clear instructions on what to do, how to do it, and in what sequence. If there are common errors you’re trying to avoid, tell it what it must NOT do as well as what it must do.
AI needs you! You are the source of the context, instructions, details and input that AI needs to produce its best possible work.
Happy prompting!
Phil 👋
P.S. If you want some hands-on experience with AI in a safe and supported environment, join me and other educators and learning professionals on an upcoming cohort of my AI-Powered Learning Science Bootcamp.