TL;DR (generated by Claude, lightly edited by me).
- Universities have historically been the gatekeepers and validators of expertise through granting degrees and credentials, holding a monopoly on cultivating and recognising expertise.
- But the rise of generative AI has brought ubiquitous, cheap access to specialised knowledge and practical expertise.
- We are therefore moving from a paradigm of information abundance (enabled by the internet) to one of expertise abundance (enabled by AI), disrupting universities’ role as the main validators of expertise.
- Students and AI form an integrated knowledge ecosystem that can develop expertise in the absence of a human teacher, and attempts to separate the student from the AI are misguided.
- Furthermore, the scale, ubiquity, integrated nature, personalisation, and cost reductions of AI pose a unique existential threat to the pedagogical model of universities.
- However, universities depend on their position as the sole arbiters of expertise, and are understandably resistant to changes that disrupt their gate-keeping role.
- New education systems built around AI-first, personalised learning, authentic assessment, and focus on skills are likely to emerge, threatening the increasingly precarious role of universities as the gold standard for regulated learning.
The rise of abundant expertise in the form of generative AI questions the university monopoly on expertise provision and validation. Leadership in the creative deployment of AI for learning, teaching, and assessment will require a change in mindset and a shift towards a new paradigm, which universities have traditionally found hard to manage.
Background
The ideas in this post have been on my mind for the last few weeks, triggered in part by the question: What happens when an unstoppable force (AI) meets an immovable object (a university)?
I make several assumptions in the post, the most important of which is that we’re seeing an accelerating trend towards massive improvements in AI performance, and that this progress will continue. I’m also assuming that we’ll keep seeing improvements in accuracy, reductions in bias and hallucination, and an associated increase in trust in the outputs of AI. I’ll also concede here that some of the claims in this post, especially with respect to AI, are speculative and might be too optimistic. And finally, I grant that I could be completely wrong about universities, and that they may emerge as the true heroes in this story.
I started including some arguments to support my assumptions but then decided that the post was already much longer than I wanted it to be (“I have made this longer than usual because I have not had time to make it shorter”). I may write some follow-up posts that address these issues.
And finally, I wrote this post in collaboration with Claude. In my next post, I’ll describe what that process looked like, along with a few thoughts on how it felt.
If you make it to the end of the post, I’d love to hear your thoughts.
Universities and their monopoly on expertise.
In this post, I’m going to argue that universities are poorly positioned to lead in the creative use of AI to support learning, teaching, and assessment. And that this is because of universities’ privileged position as the cultivators and validators of expertise, which is threatened by the rise of generative AI.
Let’s start with an overview of the characteristics of ‘expertise’.
- Having extensive knowledge and experience in a particular domain or skill, which typically takes years of study and practice to develop.
- Being able to apply knowledge to solve problems, make decisions, and achieve goals in that domain. Experts don’t just know facts, they know how to use them.
- Having intuitive understanding and ability to judge complex situations. Experts have a deep comprehension of their domains, and can make discerning judgements.
- Experts don’t just follow set procedures, they can creatively apply knowledge in new ways, adapting and innovating in changing contexts.
- Experts are often good at conveying complex concepts simply, communicating knowledge and skill development effectively to others.
In other words, expertise goes beyond knowing things about the world and includes the skills and capability to use that information to act effectively. Experts apply their deep knowledge to solve real-world problems, make sound judgements, and have an intuitive understanding of complex situations that enables them to adapt and innovate. And in my opinion, generative AI is increasingly showing evidence of satisfying these criteria for demonstrating expertise across a wide range of knowledge domains.
From information abundance to expertise abundance.
If we accept that LLMs like Claude and GPT provide access to expertise, then we’re moving from a paradigm of information abundance to one of expertise abundance. Until recently, LLMs could talk about physics but they couldn’t do physics. But recent developments in the services being layered on top of foundation models show that they are not just passive repositories of information, but interactive agents that produce real-world outputs with economic value and personal meaning. And it’s the personalised nature of learning through generative AI that universities are struggling with; the connection between learner and AI that poses a significant threat to the pedagogical model of universities.
For example, here are two simple prompts demonstrating the ability of LLMs to connect complex concepts to personally meaningful experience, something that teachers regard as one of their core skills:
- Prompt: “Explain the electric car industry to me, using golf as a metaphor.”
- Prompt: “Explain circle theorems from the UK GCSE assessment, using the metaphor of formula one racing.”
Both of these prompts use personally relevant context (i.e. golf and formula one racing) to make conceptual links to complex ideas. And both of these prompts gave fantastic responses that would absolutely help learners connect their personal experience and interest to challenging concepts (you can replace ‘golf’ and ‘formula one racing’ with any number of different concepts). How many teachers could do this at all, let alone be available at a time and place that suited every student? A caveat that’s worth mentioning: I didn’t know enough about circle theorems to gauge the accuracy of the outputs, but the electric car metaphor was spot on. However, this is part of learning: not simply accepting the explanation, but experimenting with it, something that is not only possible, but fun, when using LLMs.
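The pattern behind both prompts is identical and trivially programmable, which is part of why this kind of personalisation scales in a way that human teaching cannot. As a minimal sketch (the function name and wiring here are my own illustration, not any particular LLM product’s API), the template behind the two examples might look like:

```python
def metaphor_prompt(concept: str, interest: str) -> str:
    """Build a prompt asking an LLM to explain a concept through a
    personally meaningful metaphor.

    Both parameters are free text; the LLM does the conceptual
    mapping, so any pairing of concept and interest works.
    (Hypothetical helper for illustration only.)
    """
    return f"Explain {concept} to me, using {interest} as a metaphor."

# The two examples above, generated from the same template:
prompts = [
    metaphor_prompt("the electric car industry", "golf"),
    metaphor_prompt("circle theorems from the UK GCSE assessment",
                    "formula one racing"),
]
for p in prompts:
    print(p)
```

Swap the second argument for any learner’s actual interest — knitting, cricket, Minecraft — and the same template yields a freshly tailored explanation, on demand, at essentially zero marginal cost.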
It follows that you don’t need to be an expert in any knowledge domain in order to engage with quite technical concepts within the domain, which means that soon we won’t need teachers to mediate between learners and expertise (I acknowledge that this is a big jump in reasoning, but I’m confident enough in the claim that I’ll leave it in). This is profound. In previous paradigms, learners needed to be guided through their zone of proximal development by a more knowledgeable other, typically a teacher or textbook. And teachers’ expertise – and their textbooks – tend to be grouped together in schools or universities.
The centralisation of books and people within universities created a monopoly on access to information and the development of expertise, which universities have managed to maintain, despite the rise of the internet and other technologies. Universities are respected hubs and gatekeepers for expertise in many fields, and society looks to them for standards around knowledge and competencies for professionals, through the conferring of credentials that reinforce privilege and prestige around expertise. For centuries, universities have been the gatekeepers to the kind of learning that has status in society, with the authority to decide who gets to learn what, how, when, and where.
But in a world with universal, cheap access to expertise, why would anyone go to university?
Faced with this new paradigm of expertise-on-demand, universities initially focused their efforts on banning AI. When it became clear that AI was being integrated into everything and that it couldn’t be managed, they switched to acceptance with conditions. And for now, the conditions imposed seem to emphasise the need to isolate the work of the student.
Why is it so important to make sure that we separate out the work of the student? I wonder if it’s because recognising the student and AI as a single, self-reinforcing unit would be to acknowledge that AI has replaced the teacher. If expertise can be developed purely through interaction with generative AI, what role is there for teachers? And, if using AI results in competent action in the world, then what role is there for the university as the validator of expertise? Faced with this question, universities seem to have reluctantly settled for accepting AI (only after they realised they had no choice), while also placing strong constraints on its use.
And central to these constraints is the attempt to artificially separate the contributions of the student from the contributions of the AI. But students and AI form an integrated knowledge ecosystem capable of developing expertise in the absence of a human teacher. As long as universities hold onto the belief that AI can, and must, be separated from the student, we’ll be stuck in a paradigm where universities try positioning themselves as the gatekeepers to information and expertise. But this ignores the reality that students and AI form a collective intelligence where the originator of ideas is impossible to isolate.
Where once only human experts could offer customised guidance, new AI services provide on-demand coaching and support, opening up new possibilities for accessing and applying expertise in a wide range of knowledge domains. The barriers to access that universities controlled are falling, as capable AI systems start rivalling human knowledge and practical wisdom, enabling anyone with an internet connection to leverage the vast knowledge and skills embedded in LLMs without the need for formal education or training. And I don’t see much evidence that universities are embracing this shift. I think they are invested in their own prestige and profit. They are too resistant to change, being slow and bureaucratic. They are too detached from the real needs and aspirations of learners and teachers, stubbornly focused on teaching facts and theories, rather than skills and competencies.
I may be wrong, but I imagine that soon, anyone, anywhere, will have access to specialised information and expertise via LLMs, in a format and structure that’s personally meaningful, cheap, convenient, and in my opinion, pedagogically superior to what the average university programme can offer.
Haven’t we heard all this before?
Others have made similar arguments in the past, claiming that this or that technology was poised to make universities irrelevant. And yet we still have universities. However, I think that AI poses a different level of disruption, for a few key reasons:
- Rate of change. The rapid advances in AI, especially in the past 12 months, suggest the capabilities of generative AI are on an exponential curve unlike previous technologies. This rate of change makes adaptation more difficult, even in cases where stakeholders are motivated to change.
- Accessibility. Today’s AI is delivered through consumer apps and services, putting powerful capabilities like information synthesis and expertise into any individual’s hands. Previous technologies weren’t as ubiquitously available.
- Integrative abilities. AI can integrate and connect knowledge across disciplines and sources in ways that weren’t possible before. This removes the need for institutions to be the stores of knowledge and expertise.
- Customisability. AI can provide personalised and customisable expertise, undermining the one-to-many model of universities. Previous technologies weren’t as adaptable and couldn’t provide the level of personal service that’s possible with generative AI.
- Cost reductions. AI-driven automation and virtualisation can massively reduce the cost of learning, challenging the high-price model of universities. Earlier technologies didn’t reduce costs as drastically, or were simply integrated into universities, further reinforcing their entrenched positions.
In other words, the scale, ubiquity, integrative nature, personalisation, and cost reduction made possible by AI’s rapid advance create an existential threat to universities that’s unlike anything we’ve seen in the past.
I believe that universities are stuck in a paradigm that’s incompatible with the new capabilities of generative AI, making it difficult for them to lead in the use of AI to support learning and teaching. Emerging generative AI systems are not just sources of information, but partners in creation, communication, and learning. They could be collaborators within the context of a new approach to learning, rather than competitors.
What might future education systems look like?
I’d like to see the following changes implemented in universities, rather than a focus on more committees and regulation.
- AI-first. AI will be core to every decision, rather than something that’s forced into the existing model. We need more students and staff using more AI, more of the time.
- Personal learning. Real, one-to-one, personal learning through AI-based systems, of a kind that universities, within their existing pedagogical model, cannot deliver affordably at scale.
- Authentic learning. Approaches to learning, teaching, and assessment, where the creative, innovative, and thoughtful use of AI is an integral part of the process, and students and teachers use the tools and processes expected in practice.
- Project-based learning. Students work on practical real-world projects, using AI tools for design, analysis, and communication, and the focus is on applying knowledge rather than accumulating it.
- Rapid skills development. A move away from taking a long time to know more things, to a focus on getting much better at doing things, much more quickly than is acceptable now.
- Self-directed, AI-supported, curricula. Students direct their own learning pathway in collaboration with an AI mentor that recommends materials, learning experiences, and assesses progress.
The current university model could adapt to an era of symbiotic intelligence and expertise-on-demand by moving away from assessing students atomistically, towards a full knowledge ecosystem that integrates AI. But I worry that universities’ attachment to their history and privilege blinds them to the rising challenge from generative AI, inhibiting their ability to lead in the creative deployment of AI for learning, teaching, and assessment. The integrated student-AI unit challenges outdated paradigms in which it is the university that gets to decide who can do what. Progress requires not only acceptance, but an active embrace of the symbiotic potential of learners and machines.
AI really is an unstoppable force, and the question I have is, when will universities realise that they are no longer an immovable object?