Unless you live in a vacuum, you will have seen the news this week that OpenAI have released a new “assistants” function. Put simply, this development means that it’s now possible for anyone who can use ChatGPT to “build a chatbot”.
Since its release, I’ve spent some time digging into the new assistants feature to ask:
What are AI “assistants” and how do you build them?
How might educators use AI assistants?
What do AI assistants mean for the role of human educators?
Here’s what I’ve learned so far!
What Are AI Assistants (and how do I build one)?
AI assistants feel like the sort of autonomous AI sidekick that we imagine with both delight and horror: a world in which we can reproduce ourselves and build AIs capable of accomplishing tasks on their own.
As wild as this may sound, in practice it isn’t too different from existing ChatGPT functionality: enter some instructions and, perhaps, some reference content and get an output. Indeed, having played with it quite a bit this week, I think the key development of OpenAI’s Assistants is less about the tech and more about who can use it.
In short, OpenAI have reduced the barrier to entry for developing an AI bot to more or less zero, making it possible for anyone with basic prompting skills to build their own AI.
The process is pretty simple:
Open OpenAI Playground
Select “Assistants”
Add a set of instructions that tells the AI what to do, how, and why (aka a prompt; ideally a structured one using something like Gianluca Mauro’s CIDI approach)
Select a model. As a rule of thumb, GPT-3.5 is cheaper and faster, while GPT-4 costs more and is slower but produces higher-quality outputs
If it helps the AI to complete your task, you can also a) switch on tools like Code Interpreter and/or b) upload some source data, e.g. a document
Hit save, and say hi to the bot in the message panel to try it out! (If you’d rather build in code than in the Playground, there’s a sketch of the equivalent API calls just below.)
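For the curious, here’s a minimal sketch of the same flow using OpenAI’s Python SDK (v1.x, where Assistants sit under a beta namespace at the time of writing). The assistant name, instructions, and model string are my own illustrative choices, not anything prescribed:

```python
import time
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in your environment

# 1. Create the assistant: the instructions are your prompt from step 4.
assistant = client.beta.assistants.create(
    name="Lesson Planning Helper",  # illustrative name
    instructions=(
        "You are a helpful teaching assistant. Given a topic and a year group, "
        "draft a structured lesson plan with objectives and activities."
    ),
    model="gpt-4-1106-preview",
    tools=[{"type": "code_interpreter"}],  # optional, as in step 6a
)

# 2. Conversations live in threads; add the user's message to one.
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="Plan a 60-minute introduction to fractions for Year 4.",
)

# 3. Run the assistant on the thread and poll until it finishes.
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)
while run.status in ("queued", "in_progress"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

# 4. The assistant's reply is the newest message on the thread.
messages = client.beta.threads.messages.list(thread_id=thread.id)
print(messages.data[0].content[0].text.value)
```

The split takes a little getting used to: the assistant is a reusable configuration, while each conversation lives in its own thread that you run the assistant against.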
How Could Educators Use AI Assistants?
The good news is that there are a tonne of potentially powerful assistant use cases for educators. Here are just a few that I have played with this week:
Curriculum Mapping Assistant
Function: This assistant would analyse existing course materials, standards, and learning outcomes to suggest alignments and identify gaps in the curriculum.
Data References: Course syllabi, educational standards, learning outcome databases, and current course content (e.g., lecture notes, assignments, assessments).
Learning Analytics Assistant
Function: It would collect and analyse learner performance data to provide insights into learning trends, predict at-risk students, and suggest interventions.
Data References: Student grades, quiz and exam scores, learning management system (LMS) activity logs, and student feedback.
Feedback Generation Assistant
Function: This assistant would help provide personalised feedback on student assignments by analysing submission content against rubrics and exemplars. Check out Ethan Mollick’s experiment with this! (A sketch of how reference data like rubrics might be attached appears after this list.)
Data References: Student submissions, grading rubrics, exemplar assignments, and instructors’ past feedback for maintaining consistency in tone and expectations.
Content Update Assistant
Function: Automatically reviews and suggests updates to course materials based on the latest research findings, publication updates, and current real-world examples.
Data References: Academic journals, news feeds, databases of recent publications in relevant fields, and online educational resources.
Accessibility Compliance Assistant
Function: Reviews all course materials to ensure they meet accessibility standards, suggesting modifications where necessary.
Data References: Accessibility guidelines (e.g., WCAG), course materials in various formats (text, video, audio), and metadata descriptions for content.
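To make one of these concrete, here’s a hedged sketch of how the Feedback Generation Assistant might be wired up with its reference data, using the same Python SDK and the beta retrieval tool that shipped alongside Assistants at the time of writing. The file names and instructions are illustrative:

```python
from openai import OpenAI

client = OpenAI()

# Upload the reference material the assistant will ground its feedback in.
# "assistants" is the purpose flag OpenAI requires for files used by assistants.
rubric = client.files.create(file=open("essay_rubric.pdf", "rb"), purpose="assistants")
exemplar = client.files.create(file=open("exemplar_essay.pdf", "rb"), purpose="assistants")

# Create the assistant with retrieval switched on so it can search those files.
feedback_assistant = client.beta.assistants.create(
    name="Feedback Generation Assistant",  # illustrative name
    instructions=(
        "You give personalised feedback on student essays. Always ground your "
        "comments in the attached rubric, and point to the exemplar where useful."
    ),
    model="gpt-4-1106-preview",
    tools=[{"type": "retrieval"}],
    file_ids=[rubric.id, exemplar.id],
)
```

From there, each student submission becomes a message on its own thread, exactly as in the earlier sketch.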
What do AI Assistants Mean for Human Educators?
This is all great news, right? Well, maybe…
Just like ChatGPT, the power of assistants depends on our ability to prompt and manage them. GPT assistants might feel autonomous, but they aren’t - yet.
When experimenting this week, I spent a lot of time refining my instructions to get the outputs I wanted. And just like ChatGPT, assistants still hallucinate, which meant I also spent a lot of time validating often questionable outputs.
The same risks around data privacy exist too: being able to upload reference content for an assistant is great, but we need to be mindful of what sorts of information we’re sharing and who we’re sharing it with.
The human-assistant relationship may feel private and exclusive, but it isn’t.
In a world where the barrier to entry for building AI assistants is now close to zero, humans are more important than ever. Several key skills become crucial for humans to develop in order to a) build and b) buy and use these assistants effectively and responsibly.
Here are my top four tips for how to build and use AI assistants responsibly:
Develop Domain Knowledge: A deep understanding of the specific field or subject where the AI assistant is applied is essential. This expertise allows the creator or user of the AI to provide relevant and accurate data for the assistant to draw on, and to assess the quality of its outputs. Just as important is the ability to recognise what is missing from the AI's knowledge or capabilities: subject matter experts can identify the nuances and context that the AI does not capture and find ways to provide this information.
Learn How to Curate and “Feed” Data: The ability to build, select and/or curate high-quality, representative datasets for the AI to use is also essential. The quality of the output from an AI assistant is heavily dependent on the quality of the input data.
Error Detection & Quality Control: Being able to identify when the AI’s outputs are incorrect or when the assistant is "hallucinating" remains critical. This is particularly important because AI assistants and the nature of our relationships with them mean that they can be especially convincing in their outputs, even when they are wrong. Implementing robust testing and validation processes to ensure the AI assistant are performing as intended and continuously monitoring AI performance and making adjustments as needed is critical.
Critical Analysis: The capacity to critically analyse and evaluate the purpose, design, and outputs of AI assistants created by others. This involves questioning the underlying assumptions, data sources, and algorithms used.
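On the error detection point, even a crude harness goes a long way. The sketch below assumes a hypothetical ask_assistant() helper that wraps the thread/run/poll calls from the earlier example; the test cases are illustrative. It catches missing essentials rather than subtle errors, so it complements human review rather than replacing it:

```python
# A crude spot-check harness: run known prompts through the assistant and
# flag replies that are missing terms a correct answer must contain.
# ask_assistant() is a hypothetical helper wrapping the thread/run/poll
# calls shown earlier; the test cases here are illustrative.

TEST_CASES = [
    {
        "prompt": "Summarise the key WCAG 2.1 principles.",
        "must_mention": ["perceivable", "operable", "understandable", "robust"],
    },
    {
        "prompt": "What grade bands does our rubric use?",
        "must_mention": ["distinction", "merit", "pass"],
    },
]

def spot_check(ask_assistant) -> None:
    """Run each test prompt and report any required terms the reply missed."""
    for case in TEST_CASES:
        reply = ask_assistant(case["prompt"]).lower()
        missing = [term for term in case["must_mention"] if term not in reply]
        status = "OK" if not missing else "FAIL"
        detail = f" -- missing: {missing}" if missing else ""
        print(f"[{status}] {case['prompt']}{detail}")
```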
Concluding Thoughts
Reducing the barrier to entry for building AI assistants to close to zero comes with both opportunities and risks.
The inevitable rise of AI assistants in education will herald a new chapter where the boundaries of teaching and learning are continually expanded. As we harness these technologies, humans will play a critical part in ensuring that we leverage the opportunities and mitigate the risks of artificial intelligence.
The question is not if, but how, we will choose to navigate the exciting but risky world of AI + education.
Happy innovating!
Phil 👋
PS: If you design learning experiences and want to get hands on and experiment with AI supported by me, you can apply for a place on an upcoming cohort of my AI-Powered Learning Science Bootcamp here.