Health professions education has long embraced social constructivist theories as a fundamental approach to learning. However, the emerging capabilities of AI challenge us to reconsider our assumptions about what makes social learning effective. Social constructivism and AI are often positioned as opposing forces, with many educators claiming that the human teacher-student relationship is irreplaceable. This assertion is typically presented as a self-evident truth, but I'm no longer convinced that it is.
While social interaction remains vital to health professions education (HPE), we should question whether these interactions must be exclusively human-to-human. What if the qualities we value in human teachers could be incorporated into AI systems? What if AI could not only supplement, but potentially enhance our social learning environments?
Social constructivism in health professions education
Social constructivism is the idea that knowledge is built through interaction with others. In healthcare education, this happens when students join communities of practice, absorb tacit knowledge from experienced clinicians, and participate in collaborative problem-solving.
This social dimension of learning is especially important in healthcare, where knowledge isn’t just about facts but about context, values, and human connection. We see social constructivism at work in bedside teaching, case discussions, and clinical debriefs where meaning is created through dialogue and shared experience.
Traditionally, we’ve assumed that this social construction requires human teachers. After all, who better to initiate students into the culture of healthcare than those who already belong to it? But I think it’s time that we question this assumption.
Human characteristics that support learning
We often romanticise the characteristics of human teachers, even arguing that our human imperfections are what make learning meaningful and authentic.
In HPE, these valued “flaws” include:
- Sharing personal clinical failures and lessons learned, which helps students understand that expertise develops through mistakes
- Emotional responses to ethically complex situations, which model how to balance clinical objectivity with human compassion
- Adaptability based on student needs, where teachers adjust explanations or examples when they sense confusion
- Cultural and contextual knowledge that helps students navigate the social complexities of healthcare
But are these qualities inherently “human,” or are they simply characteristics we value? Could they be recreated, or even enhanced, in AI systems?
Engineering the “human” into AI
What if we could intentionally design these valued characteristics into AI systems? What if an AI tutor could:
- Share clinical error narratives. Walk students through complex cases where initial assumptions proved wrong; not simplistic cautionary tales, but nuanced examples showing how uncertainty affects clinical reasoning.
- Model ethical reasoning. Present dilemmas from multiple perspectives (therapist, patient, family member, financial officer), demonstrating the process of ethical deliberation, including doubt and the role of cultural factors in decision-making.
- Detect confusion. Modern AI systems can already identify confusion through natural language analysis and adjust their explanation styles based on learning patterns; they could potentially offer appropriate emotional support during challenging topics.
- Present diverse clinical contexts and cultural perspectives. Acknowledge cultural biases in medical knowledge and adapt communication styles to different cultural contexts.
None of these capabilities requires human consciousness – they require sophisticated pattern recognition, adaptability, and rich knowledge sources, all of which are rapidly evolving in AI systems.
Reimagining social constructivism with AI
The future may not be about AI replacing humans, but about intentional integration. AI could provide unlimited practice opportunities, expose students to a wider range of clinical presentations, or facilitate peer discussions at scale, offering learning opportunities that the current system cannot.
Yes, we might be losing something in this transition. There’s an authenticity to learning from someone who has personally experienced clinical practice. There’s a spontaneity to human interaction that AI may not be able to fully replicate. But we should ask: Are these losses inevitable, or are we romanticising human interaction?
Exceptional human teachers are relatively rare; we’re all limited by time, energy, and personal experience. AI-based tutors won’t need to be better than the best human teachers; they only need to be cheaper than the average human teacher in order to make a significant difference to HPE on a global scale.
Could we use AI to create new forms of meaningful, social learning interactions we haven’t imagined yet?
A more inclusive definition of “social learning”
Instead of seeing AI as diminishing social learning, we can expand our definition of “social” to include interactions with AI-based systems. We already form meaningful relationships with fictional characters in books and films. Is it so far-fetched that we might develop valuable learning relationships with increasingly sophisticated AI?
We should question the idea that human-only teaching is some kind of gold standard, and explore the wide range of AI-supported learning environments that could preserve the best aspects of human teaching while transcending our limitations. Combining social constructivism and AI might be a reasonable way to integrate a theory we feel confident in with a new technology that is currently driving an unproductive, if comforting, narrative in health professions education.
Maybe the question we should be asking isn’t whether AI can replace human teachers, but how we can collaborate with AI to create better learning experiences. The future of health professions education likely isn’t either AI or human teaching, but a thoughtful integration of both—leveraging technology to extend human capacity rather than replace it.
As AI capabilities continue to advance, the distinction between human and AI teaching may become increasingly blurred. Rather than resisting this evolution, we should be actively shaping it to create more effective, accessible, and equitable health professions education.