Ten Questions about AI and Higher Education
The Sorcerer's Apprentice in Disney's Fantasia is overused in discussions of AI and education. Even so, it still illuminates deep questions we don't yet have good answers to, but probably should.
Whenever I feel I’m getting a handle on the intersection between AI and higher education, I find my base assumptions challenged. And this past week has been no exception — especially with the non-launch of Anthropic’s Mythos Preview.
But it’s also a week where some serious inspiration has come from an unexpected source. And this led to me sitting down and hashing out ten questions about AI and higher education that I don’t have good answers to, but I think we should be taking seriously if we believe in truly empowering our students to thrive in the future we’re building.
The unexpected source was, intriguingly, Disney’s 1940 film Fantasia, and in particular the Sorcerer’s Apprentice scene where Mickey Mouse gets caught out flexing power without understanding.
I’m currently working with the film as backdrop for exploring AI from multiple perspectives — much as I used science fiction movies in the book Films from the Future — and will be writing more about this at some point. But for this week the story of Mickey reveling in the unearned and undisciplined power of sorcery provided the perfect catalyst to crystallize the questions swirling around my mind.
Before we get to the ten questions, it’s worth noting that the Sorcerer’s Apprentice scene is something of a cliché when it comes to discussing the possible dangers of wielding technological power with limited understanding — especially when it comes to AI. It’s almost too easy to project comparisons onto the animation and to then use these as illustrative warnings — the lure of lazy technology-enabled shortcuts; the seduction of frictionless power; the myopia of starting something without thinking about the consequences.
But when the scene is engaged with more deeply, it has a surprising power to open up ways of thinking about AI in higher education that are not obvious at first, and that eschew the convenient cautionary tales usually attached to the story.
Rather than provide a long explanation of how the scene led to each question and what each reveals about the evolving relationship between AI and higher education, I thought I’d simply list them as conversation-starters and see where they go.
But because each one does have a connection at some level to Disney's Sorcerer's Apprentice, it is worth watching (or re-watching) the scene first and thinking about the questions in light of it.
With that, here are ten questions inspired by this sequence that are challenging me at the moment:
Ten Questions about AI and Higher Education:
1. What does competency mean in an age of AI?
2. What does success mean in an age of AI?
3. How can we help students avoid the illusion of understanding and ability when using AI?
4. What happens in a future where students become AI masters, and teachers AI apprentices?
5. When AI promises near-frictionless mastery of a subject, what is the value of pursuing mastery without it?
6. What do we owe our students in an age of AI?
7. What does it mean to model mature AI use?
8. How do we empower our students to navigate the challenges and affordances of AI-enabled efficiency?
9. How do we help our students stretch their imagination in an age of AI, without being subsumed by the technology?
10. How do we provide students with the tools and skills to discover and embrace what it means to be human in an age of AI?
Postscript
Of course, having teased you with the video and the questions, I couldn’t just leave it there!
These questions are — in case you hadn’t realized yet — not the usual suspects when it comes to grappling with the intersection between AI and higher education. They don’t, for instance, focus on things like cheating, AI-proofing assignments, creating syllabi and assignments with generative AI, using bots in class, and the many other challenges and opportunities that exploded onto the scene when OpenAI launched ChatGPT in 2022.
These are still current topics of conversation in many colleges and universities. But emerging AI capabilities are opening up new possibilities that make such concerns feel increasingly out of touch.
These include LLMs that can interpret, plan, and carry out tasks while checking their work to avoid errors; platforms like OpenAI's Codex and Anthropic's Claude Code that can deploy teams of agents to research and complete complex tasks; AI agents that can cross-reference and extract insights from more resources than any single person could handle; AI apps that generate prose indistinguishable from human writing; AI agents that can take the place of students working on course assignments; and systems that can deploy hundreds of AI agents and achieve thousands of hours of human-equivalent work overnight.
And that’s just the start.
In a week when Anthropic decided not to release its latest model to the public because it was deemed too powerful, the idea that AI and education is just about navigating a single prompt window in a web browser is beginning to feel deeply anachronistic.
To be clear, I am not talking about artificial general intelligence (AGI) or "superintelligence" here, both of which are rather ill-defined concepts. Rather, I'm talking about the emergence of transformative technological capabilities whose full extent we simply cannot comprehend, and yet which already offer near-frictionless access to power that transcends our understanding.
These capabilities are deeply intertwined with — and influential on — how we think, how we learn, how we develop our understanding of ourselves and the world we live in, and how we imagine and create the futures we aspire to.
If we aren’t taking this seriously in higher education — either as an existential threat or as an opportunity unlike any that we’ve previously faced — we risk being swept away in the coming AI tsunami.
And this brings us back to Fantasia.
In the Sorcerer’s Apprentice scene, Mickey risks being swept away by the consequences of his naivety. But in a world where transformative AI touches everything we do, ignoring its power may be just as naive as wielding it without understanding.
Mickey's mistake wasn't only that he embraced sorcery he didn't understand; it was that he acted with certainty unshaken by his ignorance. Is the modern equivalent acting with equal certainty in the other direction: trusting that traditional mastery alone is enough, while the water keeps rising?
And there’s the eleventh, bonus question 🙂