The Changing Relationship Between Education & Tech
What Google DeepMind’s AI for Learning Forum Signals About the Next Era of Ed-Tech
Hello folks 👋
Next week, I’ll be heading to London to take part in Google DeepMind’s AI for Learning Forum — a two-day session that brings together a small number of AI leaders, educators and students to “co-create futures of AI and learning that are equitable and human-centred.”
This isn’t any old learning event. Around the table will be some of the biggest names in AI — people who normally spend their days shaping the very infrastructure of the field. They include:
Demis Hassabis, co-founder and CEO of Google DeepMind, is one of the architects of modern deep reinforcement learning. He is the leading mind behind AlphaGo, AlphaFold, and systems that have redefined what “learning” means for machines. His recent public focus on human learning — “meta-skills” and the ability to learn how to learn — marks a striking pivot from algorithmic mastery to educational purpose.
Jeff Dean, Google’s Chief Scientist, built the technical bedrock of the modern AI ecosystem: distributed systems like MapReduce, neural network frameworks like TensorFlow, and the Google Brain team that catalysed today’s deep learning boom. The fact that someone at his level of technical depth is now asking how people learn — not just how machines do — signals a new kind of curiosity inside Big AI.
Mustafa Suleyman, co-founder of DeepMind and now CEO of Microsoft AI, has long been known for bridging hard tech and ethics. His recent work focuses on AI alignment and human augmentation — questions that land squarely in the territory of learning, agency, and capability building.
James Manyika, Senior Vice President of Research, Technology & Society at Google, has spent years mapping how technology reshapes economies and labour. His presence connects the forum’s educational focus to the wider social question: what kinds of learning ecosystems does an AI-driven world actually need?
For years, these leaders have focused on how machines learn. Now they’re turning their attention to how humans learn with machines.
In this week’s blog post, I argue that the AI for Learning Forum and other initiatives like it are beginning to signal a deeper shift in ed-tech: one that moves us away from “How do we build and then sell tech into classrooms?” and toward a very different conversation: “How can tech improve how humans learn, and how do we design that future together?”
Let’s dive in!
The Old System: Tech Builds, Education Adopts
For the last few decades, the relationship between education and technology has been one-way. Technologists built tools; educators were invited (and often had to be heavily persuaded) to adopt them. Meanwhile, the end users (the learners themselves) rarely had any say at all.
In this model, the implicit questions driving the industry weren’t “How do we enable people to learn more effectively?” or “What do learners need?”. Instead, they were: “What will education leaders pay for?” and “How can we sell this technology into classrooms?”.
In the traditional ed-tech model, the people least consulted in the process of designing and building ed-tech products were the ones who understood how learning actually works and had direct experience of what learners actually need.
That framing led to a flood of digital tools built primarily around efficiency rather than pedagogy — platforms that automated grading, digitised worksheets, and replaced textbooks and handouts with PDFs. These systems often promised “personalised learning,” but what they mostly delivered was a new way to store and deliver content.
As education researchers have pointed out, this model produces limited impact because it treats learning as a logistics problem rather than the complex pedagogical, social, cognitive and emotional challenge it is in reality. Studies of large-scale ed-tech adoption (e.g. Reich, 2020; Selwyn, 2016) have shown that most technology built for education increases access but rarely deepens understanding or engagement.
A vivid example came during the pandemic, when hundreds of millions of students moved to online platforms like Google Classroom, Zoom, and Teams. These tools kept learning going, but not necessarily growing. When evaluated later, most institutions reported a net-negative impact on learning outcomes once these technologies became a more central part of the learning process (OECD, 2021).
Why is this? In part, it’s because we have traditionally built ed-tech systems and solutions outside the realities of learning. As MIT’s Justin Reich observed in Failure to Disrupt, when developers and educators operate in silos, “ed-tech innovation tends to circle the same problems of scale, efficiency, and delivery, without touching what makes learning meaningful.”

In short, the established system treats education leaders as customers, educators as end-users, and learners as data points.
The big question I’m interested in is: is this finally changing?
The New System: From Procurement to Partnership
Next week’s Google forum is, in many ways, a live experiment in doing things differently. The agenda itself tells an important story.
At the event, there won’t be sales stands or flashy product demos. Instead, we (educators), technologists and learners will start with a blank sheet and explore questions like:
What is the purpose of human learning?
What are the optimal conditions required for humans to learn?
What does it take to make AI-powered learning genuinely equitable, not just available?
How can we involve students in shaping the tools that will shape them?
Those aren’t questions about adoption or procurement; they’re questions about co-creation. Events like these seek to define what “good” looks like when tech shows up in learning, and how we make sure learners and educators are authoring that future, not just adapting to it.
At the heart of this shift is co-creation — the idea that educators, learners, and technologists must design together from the outset. In design research, this is called participatory design (Sanders & Stappers, 2008): a model where end-users are collaborators, not customers.
We’ve seen this pattern before in other domains. When engineers in medical technology began working directly with physicians and patients, the quality and usability of medical devices improved dramatically (Wilcox et al., 2019). Similarly, agricultural “digital farms” in the UK have combined data scientists, agronomists, and farmers in shared experiments — producing innovations that are scientifically sound and practically workable.
My hope is that the same model is now emerging in education. Beyond the Google event, several other initiatives signal the same trend:
1. Microsoft’s Elevate strategy is one clear example: the company has committed $4B over five years to a global AI education and training initiative that aims to help 20 million people earn AI-related credentials.
Crucially, Elevate isn’t just about pushing more Microsoft tools into education systems; it explicitly commits to co-designing solutions with schools, universities, and non-profits. AI agents and Copilot features for learning platforms are now rolled out only after rounds of iterative work with educators, and bespoke lesson-building experiences (such as Project Spark-style pilots) are being shaped directly by teacher input. The language has shifted from “AI tools for teachers” to “AI tools built with teachers,” with digital equity and educator agency foregrounded rather than treated as afterthoughts.
2. OpenAI’s higher education partnerships point in the same direction. The NextGenAI consortium brings together leading institutions like Caltech, MIT, Oxford and others, not just as customers but as R&D partners, with funding and tools explicitly earmarked for joint work on research and education. Faculty and students help define use cases, ethical protocols, and product scope, with shared governance structures rather than one-way licensing deals.

3. At the same time, ChatGPT Edu and university collaborations (from Arizona State to Oxford and the California State system) frame OpenAI less as a vendor and more as a platform for co-developing new learning experiences: local policies, guardrails, and pedagogical goals are shaped on campus, not handed down from San Francisco.
These moves sit within a broader industry pattern. Recent reports and policy work on AI in education, such as that from UCL, highlight teacher agency, student voice, and ethical governance as central design requirements, not “nice-to-haves”. Co-design, feedback loops with educators, and student-driven customisation are increasingly described as markers of successful ed-tech, not fringe practice.
Seen together, these initiatives seem to signal a broader realignment: the biggest players in AI are beginning to recognise that sustainable innovation in education can’t be built for schools—it has to be built with them.
Implications for Educators and Designers
So what does all this mean if you’re an educator, learning designer, or L&D leader?
For me, three things stand out:
1. Pedagogy is becoming a design input.
In the old model, educators got involved once the technology was finished. In practice, this meant that pedagogy and the science of how humans learn were, at best, an afterthought. In the new model, pedagogical expertise belongs at the start of the design process, not the end.
Emerging ed-tech systems — from tutoring models to classroom analytics — are being co-created right now, and the people who understand how humans actually learn are increasingly indispensable. If you’ve ever wished product teams “just understood how teaching works,” this is your moment to make sure they do.
So what can you actually do?
Re-centre your craft: Revisit core learning science (scaffolding, feedback, retrieval, transfer, motivation) and be ready to explain it in plain language to non-educators. Your “back to basics” is someone else’s revelation.
Show up in design conversations: When your institution, vendor, or IT team talks about AI or new platforms, treat it as a curriculum design conversation, not just a tools one. Ask to be in the room early.
Bring one non-negotiable principle with you: For example, “Learners must always get a chance to think before seeing an answer,” or “Feedback should target process, not just correctness.” Use that as a lens on every proposed feature.
Pedagogy isn’t the thing you bolt on after the tech is built. It’s the brief.
2. Instructional design is becoming systems design.
If the signals are reliable, tomorrow’s instructional designer won’t just design content and courses; they’ll work alongside computer scientists to co-design the interactions between humans and intelligent systems. That means translating principles you already know — cognitive load, scaffolding, motivation, feedback — into how AI tutors, copilots, or assessment tools actually behave.
So what can you do in practice?
Map your principles onto AI behaviours: Take one course or module and ask, “If an AI tutor sat alongside me here, what would good look like?” When should it prompt? When should it wait? When should it nudge, and when should it stay silent?
Prototype “rules of engagement” for AI: Draft simple guidelines like, “AI offers hints before answers,” or “AI feedback must include at least one metacognitive prompt.” These can become design requirements when you talk to vendors or internal tech teams (I’ve sketched what that might look like just after this list).
Design the whole system, not just the content: Think about the flow: learner → AI → teacher → learner. Where does the teacher see what the AI did? Where can the learner override or question it? Sketch that as deliberately as you’d storyboard a lesson.
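To make that concrete, here is a minimal, hypothetical sketch of what “rules of engagement” could look like once they are written down as design requirements rather than left as good intentions. Everything in it (the rule names, the thresholds, the tutor_response_mode function) is an illustrative assumption of mine, not any vendor’s actual product or API; the point is simply that pedagogical principles can be expressed precisely enough for a tech team to build and test against.

```python
# Hypothetical sketch: pedagogical "rules of engagement" written as explicit,
# testable design requirements for an imagined AI tutor. All names and
# thresholds are illustrative assumptions, not a real product's API.

from dataclasses import dataclass

@dataclass
class LearnerAttempt:
    question_id: str
    attempts_so_far: int     # how many times the learner has already tried this item
    asked_for_answer: bool   # has the learner explicitly requested the answer?

# The rules themselves, kept as plain data so educators can review and version them.
RULES_OF_ENGAGEMENT = {
    "min_attempts_before_answer": 2,       # "AI offers hints before answers"
    "require_metacognitive_prompt": True,  # every worked answer ends with a reflection prompt
}

def tutor_response_mode(attempt: LearnerAttempt, rules: dict = RULES_OF_ENGAGEMENT) -> str:
    """Decide whether the imagined tutor should hint, answer, or answer with a reflection prompt."""
    if attempt.attempts_so_far < rules["min_attempts_before_answer"]:
        # The learner hasn't had enough chances to think yet: hint, never answer.
        return "hint"
    if attempt.asked_for_answer:
        if rules["require_metacognitive_prompt"]:
            return "worked_answer_plus_reflection_prompt"
        return "worked_answer"
    return "hint"

if __name__ == "__main__":
    # A learner on their first attempt gets a hint, even if they ask for the answer.
    first_try = LearnerAttempt(question_id="q1", attempts_so_far=0, asked_for_answer=True)
    print(tutor_response_mode(first_try))  # -> "hint"
```

Even a toy sketch like this changes the conversation with a vendor or internal tech team: instead of asking them to “make it pedagogically sound”, you can specify exactly when the tool hints, when it gives a worked answer, and what every piece of feedback must include.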
Your job isn’t just to drop AI into existing designs — it’s to shape the ecosystem that will rebuild ed-tech from the ground up.
3. The new skill isn’t just “AI literacy” — it’s design thinking.
In the emerging ed-tech landscape, you don’t need to become a machine-learning engineer. But you do need to think like a designer: someone who can frame problems, centre users, and iterate toward better solutions.
That means being able to sit at the table and guide questions like:
What problem are we solving?
How will we know if this improves learning, not just efficiency?
Whose voices are missing from the design process?
That’s not “learning to prompt.” That’s applying design thinking to how AI gets made and used. To make this real in your own context:
Practice asking “why this, for whom?” Any time a new AI tool is proposed, ask: “Which learner problem is this solving, and how do we know they see it as a problem?” Start with user needs, not features.
Bring evidence, not just opinions. When you question a feature, connect it to research or experience: “We know from X that students need time on retrieval; how will this tool support that rather than shortcut it?” That’s classic design thinking: testing ideas against real-world constraints and evidence.
Pull others into the conversation. Involve learners, TAs, or frontline teachers in feedback sessions. Even something as simple as “Let’s ask three students what they’d want this to do” is a design move: you’re widening the circle of insight.
The more often educators and designers show up with this mindset — problem-first, user-centred, evidence-informed, collaborative — the more likely we are to get AI that actually supports learning, not just AI that looks impressive in a demo.
Closing Thoughts
For years, ed-tech was something that happened to education and educators: tools were built elsewhere, sold into systems, bolted onto practice. This is still happening today, but next week’s AI for Learning Forum — alongside partnerships like Microsoft–DfE and ASU–OpenAI — suggests something different might also be starting to emerge: a world where the people who understand learning best are no longer on the receiving end, but at the design table.
That doesn’t mean the future is guaranteed to be better. It just means there is finally space and potential for it to be.
In London next week, I’ll be in a room with some of the people who built the foundations of modern AI, asking what it means to help humans learn with machines. I’ll be bringing into that room the same things you work with every day: hard-won pedagogical wisdom, messy realities from real learners, and awkward questions about power, equity, and purpose.
I’ll leave you with this:
If technologists are finally ready to co-create with us, what’s the one principle, question, or story from your practice that you want at the centre of that design?
I’ll be carrying mine into the forum next week. If you’re willing, comment on my LinkedIn post to share yours — I’d love to bring some of yours into the room with me too.
Happy innovating!
Phil 👋
PS: Want to work with me to explore the role of AI in how we analyse, design, develop and evaluate learning? Apply for a place on my AI & Learning Design bootcamp.


