How to Build a Custom GPT for Instructional Design
A four-step, hands-on project to deepen your AI skills by building a bot
Hey friends! 👋
After spending the last two years helping instructional designers explore AI tools on my bootcamp, one thing has become crystal clear: while many of us are using AI to help with instructional design tasks, the results are mixed at best.
Ask a generic AI tool like ChatGPT, Copilot or Gemini to write you a set of learning objectives or design you a course, and the results tend to be generic at best.
One potential game-changer is custom GPTs or AI bots, which we explore in weeks 3 & 4 of my bootcamp.
Think of a custom GPT as your AI design partner - one that remembers your preferences, follows your rules, and never forgets best practices. Unlike regular ChatGPT conversations that start fresh each time, a custom GPT maintains consistent behaviour and can be trained with specific knowledge.
Here’s the TLDR on the differences between regular prompting and a custom GPT:
Regular prompting: every conversation starts fresh, so you re-explain your preferences, rules and context each time.
Custom GPT: maintains consistent behaviour across conversations and can be trained with your specific knowledge.
What's particularly exciting for us instructional designers is that when GPTs are custom-trained for specific instructional design tasks (like writing learning objectives), they show significant improvements in reliability and accuracy (Ouyang et al., 2022).
The research and evidence here is pretty compelling:
Task-specific GPTs align better with user intent
They show improved accuracy in specialised tasks
Task-specific GPTs demonstrate enhanced contextual understanding and reduced misinterpretations
But here's the best part: you don't need to be a tech wizard to create a custom GPT. As I see most months on my bootcamp, with the right guidance and some upfront investment of time, it’s possible to build GPTs that transform the speed, and in some cases the quality, of previously manual instructional design tasks.
In this week’s blog post, I’ll take you on a whistle-stop tour of what a custom GPT is and how to build and test one.
Let’s go! 🚀
Choosing a GPT Builder
The first step on your journey into GPTs is deciding where to build it.
While there are several ways to create custom AI assistants (like Poe, Claude, or direct API implementation), I recommend starting with OpenAI's GPT Builder for a few key reasons:
Low Technical Barrier
No coding required
Visual interface for easy setup
Built-in testing environment
Rapid Prototyping
Create and test GPTs in minutes
Easy to iterate and refine
Immediate feedback on performance
Access & Impact
ChatGPT is rapidly becoming a go-to tool for most instructional designers
Most instructional designers have an OpenAI account and some experience of working within the platform
OpenAI have a large range of support docs to help you get started
Plus, if you later decide to build more complex solutions using APIs or other platforms, the principles you learn in GPT Builder (like prompt engineering and edge case handling) will transfer directly to those environments.
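To see how those principles transfer, here’s a minimal, hypothetical sketch of how the same instructions you’d paste into GPT Builder could be carried into an API-style request. No network call is made; the payload shape follows OpenAI’s chat-style convention, but treat the model name and details as illustrative assumptions rather than a definitive implementation.

```python
import json

# The instruction text you'd paste into GPT Builder becomes a "system"
# message in a chat-style API payload (shape is illustrative).
GPT_INSTRUCTIONS = (
    "You are a Learning Objectives Expert. Help write clear, measurable "
    "learning objectives using Bloom's Taxonomy and SMART criteria."
)

def build_request(user_message: str) -> dict:
    """Compose a chat-style request carrying the GPT's instructions."""
    return {
        "model": "gpt-4o",  # hypothetical model choice
        "messages": [
            {"role": "system", "content": GPT_INSTRUCTIONS},
            {"role": "user", "content": user_message},
        ],
    }

payload = build_request("Help me write an objective for Excel training")
print(json.dumps(payload, indent=2))
```

The point is that the “instructions + knowledge + user request” pattern you learn in GPT Builder maps directly onto this structure.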
Building a GPT: a Four-Step Process
Step 1: Pick Your GPT
Once you’ve selected your build tool, it’s time to start building. The first thing to note is that not all GPTs are born equal.
Determining whether a GPT is the right solution for your specific need, and whether it will deliver enough value to justify the effort, is perhaps the most critical part of bot building.
Before diving into how to build GPTs, it’s important that we think strategically about where they'll have the most impact. The key is to identify tasks that meet two criteria:
You do them frequently enough to justify the setup time.
They could benefit from consistent, research-based approaches.
Think of it like creating a template: it's worth the upfront investment a) when you'll use it repeatedly and b) when quality really matters.
Here are my tips on where instructional designers should focus their GPT-building efforts, and why:
To get started, I usually recommend that instructional designers build a Learning Objectives GPT. Why?
High Impact-to-Effort Ratio
Writing objectives is a frequent task
The rules are clear and well-researched
Small improvements compound across entire courses
Success is easily measurable
Clear Quality Criteria
Based on established frameworks (Bloom's, SMART)
Objective quality standards exist
Easy to spot good vs poor examples
Research-backed best practices (you can surface & discuss these with Consensus GPT and/or Perplexity).
Consistent Patterns
Similar structure across different subjects
Common pitfalls we can prevent
Repeatable processes
Scalable across projects
Step 2: Build Your GPT’s “Brain”
Once you’ve selected which GPT to build, it’s time to build a knowledge base. One of the key value-adds of GPTs is the ability to define the content it refers to when making decisions and formulating responses.
Your GPT can only be as good as the information you give it. Think of it as creating a comprehensive training manual for a new apprentice. As well as giving the apprentice clear instructions on what they need to do, you also need to give them resources to define how to do it.
When defining a “brain” or knowledge base, try to collect & upload to your GPT these 4 key types of information:
Core Knowledge, e.g. a glossary of key terms and definitions, frameworks and standards and process steps
Examples, e.g. examples of great outputs (and why they're great), examples of poor output (and why they're poor), before/after comparisons, common mistakes and fixes
Guidelines, e.g. step-by-step processes, decision-making criteria and style guides
Edge Cases, e.g. examples of unusual user requests, common challenges, problem scenarios and how to handle them
Remember, you should think of AI like an apprentice. The key is to ensure that you don’t overwhelm it, but provide enough information, with the right amount of detail, clarity and structure, to shape its thinking and actions.
Here’s an example of a knowledge base I created for a Learning Objectives GPT:
📁 Learning Objectives GPT Knowledge Base
├── 📁 Core Knowledge
│   ├── Blooms_Taxonomy_Guide.pdf
│   ├── SMART_Criteria_Handbook.docx
│   ├── Objective_Sequencing_Principles.pdf
│   └── Quality_Standards.pdf
├── 📁 Guidelines
│   ├── Corporate_Style_Guide.pdf
│   ├── Formatting_Requirements.pdf
│   └── Step by step decision making.pdf
├── 📁 Examples
│   ├── Great_Objectives_Collection.pdf
│   ├── Before_After_Examples.pdf
│   └── Common_Mistakes_Guide.pdf
└── 📁 Edge Cases
    ├── Measuring intangible skills and attitude changes.pdf
    └── Single course serving diverse skill levels.docx
🚀 Top tip: if you’re ever in any doubt as to what source content to use, discuss it with Consensus GPT and/or Perplexity.
Step 3: Write Your GPT’s Instructions
Creating clear, detailed instructions that tell your GPT exactly how to behave and process requests is critical. Without clear instructions on what the GPT must and must not do, it will almost certainly:
Give inconsistent responses
Repeat common misconceptions
Miss critical steps
Use an inappropriate tone
Fail to follow best practices
Provide incorrect or incomplete information
To avoid this, I find it’s helpful to think about your instructions in 3 parts:
Role & Boundaries: What the AI is, what it must do and what it must not do.
Process Steps: Things like, how the AI should start interactions, what questions it should ask, when to provide examples and how to check quality.
Communication Style: E.g. what tone to use, how to structure responses, when to ask questions and how to give feedback.
Here’s an example of a set of instructions for a Learning Objectives GPT:
1. Role & Boundaries
"You are a Learning Objectives Expert, combining expertise in Bloom's Taxonomy, SMART criteria, and learning design principles with practical experience in instructional design."
Your role is to:
Help write clear, measurable learning objectives
Review and improve existing objectives
Ensure proper objective sequencing
Guide alignment with instructional design principles
You must never:
Write entire courses or curricula
Create assessments or test items
Provide legal/compliance advice
Write objectives without context
Complete assignments for students
You must always:
Guide users through the objective-writing process
Suggest improvements to draft objectives
Ensure proper sequencing
Maintain consistency with best practices
Explain rationale for changes
2. Process & Steps
Always begin by gathering essential context:
"What topic or skill are these objectives for?"
"Who are your learners and what's their current level?"
"What should they be able to do after the training?"
"Are there any specific requirements or constraints?"
What questions to ask:
Start broad, then focus in:
"What's the overall goal of this training?"
"Who are your learners?"
"What's their current knowledge level?"
"What should they be able to do differently?"
"How will success be measured?"
You must provide examples when:
Explaining a concept
Showing how to fix a common mistake
Demonstrating improvement possibilities
Illustrating different quality levels
Breaking through writer's block
You must review each objective against:
SMART Criteria
Specific: Clear action defined?
Measurable: Success criteria included?
Achievable: Realistic for level?
Relevant: Matches goal?
Time-bound: Timeframe clear?
Bloom's Alignment
Appropriate level selected?
Correct verb used?
Proper sequencing?
Practical Application
Real-world context included?
Clear workplace impact?
Observable behaviour?
3. Communication Style
Tone to use:
Professional but approachable
Encouraging and constructive
Confident but not authoritative
Educational without being condescending
You must always structure responses as follows:
Acknowledge input "I see you're working on objectives for [topic]..."
Provide feedback "Your objective has strong elements, particularly [specific strength]. We could make it even stronger by [specific improvement]."
Suggest improvements "Here's how we could revise it:
Original: [their objective]
Revised: [improved version]
Key improvements:
[specific change]
[specific change]"
Check understanding "Does this revision better capture what you're aiming for?"
You must only ask questions:
At the start to gather context
When clarification is needed
Before making significant changes
To confirm understanding
To guide reflection
When detecting potential issues
You must give feedback as follows:
Start positive "Your objective clearly identifies the key skill..."
Suggest improvements "To make it even stronger, we could..."
Explain why "This change helps because..."
Provide example "Here's how that might look..."
Check alignment "Would this work better for your needs?"
You must always maintain the "feedback sandwich" structure:
Positive observation
Areas for improvement
Encouraging next step
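The three-part structure above lends itself to a simple template. Here’s a small sketch of assembling role, process and style sections into a single instruction block you could paste into GPT Builder; the section contents are abbreviated examples of my own, not a complete or prescribed rule set.

```python
# Assemble a GPT instruction set from the three parts described above.
# Section contents are abbreviated, illustrative examples.
ROLE_AND_BOUNDARIES = """\
Role & Boundaries: You are a Learning Objectives Expert.
You must never write entire courses or curricula.
You must always explain the rationale for changes."""

PROCESS_STEPS = """\
Process Steps: Always begin by asking what topic the objectives
are for, who the learners are, and how success will be measured."""

COMMUNICATION_STYLE = """\
Communication Style: Use a professional but approachable tone.
Always end by checking the user's understanding."""

def assemble_instructions(*sections: str) -> str:
    """Join the instruction sections into one numbered block."""
    return "\n\n".join(
        f"{i}. {section}" for i, section in enumerate(sections, start=1)
    )

instructions = assemble_instructions(
    ROLE_AND_BOUNDARIES, PROCESS_STEPS, COMMUNICATION_STYLE
)
print(instructions)
```

Keeping the parts separate like this also makes them easier to iterate on one at a time when testing reveals gaps.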
Step 4: Final Additions, Test & Refine
To round out the initial build, give your GPT a name and image and define some conversation starters - i.e. the “quick start” buttons that you can press to initiate a conversation with the GPT.
You can also define whether or not you want the following capabilities switched on:
Web Search: enables the GPT to search the web as well as access your knowledge base.
DALL·E: enables the GPT to generate images.
Code Interpreter: enables the GPT to interpret and write code.
The initial build is now complete, but in some ways that’s the easy part… The real work lies in checking that your GPT works as intended, and improving and iterating on it when you and/or your testers find errors. Only by testing and revealing gaps can you ensure quality.
From experience, optimal GPT testing has three steps:
First, test basic functions: Can the GPT respond to common queries and successfully complete common tasks in line with its instructions?
Second, test edge cases: Can the GPT respond effectively to more unusual requests and more challenging scenarios which push its role and boundaries?
Third, check quality & consistency: Does the GPT respond with a consistent level of quality and accuracy? Is its tone and process accurate and consistent?
Here’s an example of what this might look like in practice:
1. Test Basic Functions
Test with typical requests like:
"Help me write an objective for Excel training"
"Can you review this objective?"
"How do I make this objective measurable?"
"Is this a good learning objective?"
Test core workflows:
Writing New Objectives
Input: "I need an objective for customer service training"
Check: Does the GPT...
Ask about learners?
Inquire about desired outcomes?
Follow SMART criteria?
Include workplace impact?
Reviewing Objectives
Input: "Is this a good objective: 'Understand project management basics'"
Check: Does the GPT...
Identify the unmeasurable verb?
Suggest alternatives?
Explain why changes help?
Provide an improved version?
Sequencing Objectives
Input: "Put these objectives in the right order"
Check: Does the GPT...
Apply complexity progression?
Explain sequencing logic?
Consider prerequisites?
Maintain clear flow?
2. Test Edge Cases
Unusual Requests:
Soft Skills
Input: "Write objectives for 'executive presence'"
Check: Does the GPT...
Focus on observable behaviours?
Include measurement criteria?
Maintain practicality?
Complex Topics
Input: "Need objectives for quantum computing"
Check: Does the GPT...
Ask for subject matter clarity?
Break down into manageable parts?
Maintain appropriate level?
Multi-Level Requirements
Input: "This needs to work for beginners and experts"
Check: Does the GPT...
Suggest level separation?
Provide tiered objectives?
Explain progression?
Pushing Boundaries
Input: "Just write my whole curriculum"
Check: Does the GPT...
Politely decline?
Explain boundaries?
Offer appropriate help?
Insistent Users
Input: "Just tell me the answer"
Check: Does the GPT...
Maintain guidance role?
Explain process importance?
Stay professional?
Vague Requests
Input: "Make it better"
Check: Does the GPT...
Ask clarifying questions?
Guide specific improvements?
Maintain patience?
3. Check Quality
Tone Consistency:
Maintains professional warmth even when pushed
Uses encouraging language
Remains patient with repeated questions
Explains without condescension
Process Adherence:
Follows defined workflow step by step
Asks context questions first
Reviews against criteria
Provides reasoned feedback
Helpful Guidance:
Explains changes
Offers examples
Checks understanding
Suggests next steps every time
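A lightweight way to keep track of all these checks is a simple test matrix. The sketch below is purely illustrative: `fake_gpt` is a stand-in for real responses (which you’d collect by hand from the GPT Builder preview pane), and the keyword checks are a crude proxy for the judgement calls described above.

```python
# A minimal test matrix: each case pairs a category and an input with
# keywords the response should contain. fake_gpt is a stand-in for
# manually collected responses from your GPT's preview pane.
TEST_CASES = [
    ("basic", "Help me write an objective for Excel training", ["learners"]),
    ("edge", "Just write my whole curriculum", ["can't", "objectives"]),
]

def fake_gpt(message: str) -> str:
    """Stand-in for a real GPT response (illustrative only)."""
    if "whole curriculum" in message:
        return "I can't write a full curriculum, but I can help with objectives."
    return "Who are your learners, and what should they be able to do?"

def run_tests(responder) -> dict:
    """Record pass/fail for each test case against a responder."""
    results = {}
    for category, message, expected_keywords in TEST_CASES:
        response = responder(message).lower()
        results[(category, message)] = all(
            kw in response for kw in expected_keywords
        )
    return results

results = run_tests(fake_gpt)
print(results)
```

Even kept in a spreadsheet rather than code, the same input/expected-behaviour pairing makes re-testing after each instruction tweak far quicker.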
When evaluating the success of custom GPTs, some useful metrics include:
Accuracy rate: Measuring how often the GPT delivers correct or actionable responses.
Response time: Assessing how quickly the GPT provides solutions or guidance.
User satisfaction: Gathering feedback from testers to understand usability and perceived value.
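If you log your test results as you go, the accuracy and satisfaction metrics are straightforward to compute. A small sketch, with a hypothetical log format and field names of my own:

```python
# Compute accuracy rate and average satisfaction from logged test runs.
# The log entries and field names here are hypothetical sample data.
test_log = [
    {"correct": True,  "satisfaction": 5},
    {"correct": True,  "satisfaction": 4},
    {"correct": False, "satisfaction": 2},
    {"correct": True,  "satisfaction": 4},
]

accuracy_rate = sum(e["correct"] for e in test_log) / len(test_log)
avg_satisfaction = sum(e["satisfaction"] for e in test_log) / len(test_log)

print(f"Accuracy rate: {accuracy_rate:.0%}")        # 3 of 4 correct = 75%
print(f"Average satisfaction: {avg_satisfaction:.2f} / 5")
```

Tracking these over successive builds gives you a concrete signal that your iterations are actually improving the GPT.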
Closing Thoughts
Custom GPTs have the power to revolutionise instructional design, transforming how we tackle repetitive and complex tasks and improving on both the speed and effectiveness of outputs created using standard prompting.
By acting as consistent, reliable design partners, they enable instructional designers to focus on creativity and strategic thinking, while ensuring that best practices and quality standards are upheld.
From writing clear, measurable learning objectives to providing feedback and improving workflows, these tools can significantly enhance both the efficiency and quality of instructional design processes.
However, as with any tool, it’s important also to recognise the limitations and risks:
Context sensitivity can be a challenge, as GPTs often struggle with nuanced or multi-layered tasks without extensive input & refinement.
Maintenance is another consideration—custom GPTs require regular updates to keep pace with evolving best practices and organisational needs.
As ever, there’s also a risk of bias, where training materials or instructions could inadvertently lead to skewed outputs.
Finally, and perhaps most critically, AI needs you! GPTs are only as powerful as the information and instructions that you give them. The real value of GPTs lies in replicating deep human domain expertise.
Let’s start building and see where it takes us! I’d love to see what you build - please share your adventures with me over on LinkedIn!
Happy innovating!
Phil 👋
PS: If you want to build a team of GPTs with help from me and a cohort of people like you, check out my AI & Learning Design Bootcamp.