Michael Rowe

Trying to get better at getting better

Thinking with AI: A framework for active engagement

If we don’t learn how to think with AI, we run the risk that we’ll let AI think for us. But what does this mean in practice? It’s so easy to simply accept an AI-generated response without questioning its approach, content, or conclusions; a bit like accepting a search engine’s first result without investigating further. The challenge isn’t with AI itself, but with how we choose to engage with it. Just as we encourage active reading rather than passive consumption of text, we need to develop practices for thinking with AI rather than deferring our thinking to it. This requires intentional engagement that treats AI as a collaborator in our thought process, not a replacement for it.

From passive consumption to active collaboration

Here are practical approaches to thinking with AI in health professions education (HPE), moving from basic to more advanced interactions. I’ve added some very simple prompt examples to each point. Please bear in mind that these examples aren’t suggestions for what you should do; they’re just a few of the many options for what you could do.

1. Use AI as a sounding board (basic level)

AI can serve as an always-available thought partner, but the key is to start with your own ideas. Doing so establishes an active rather than passive relationship with the technology, and your initial thoughts become the foundation for the AI-generated output, which means you’re guiding the direction of the interaction.

  • Articulate your rough ideas first, then use AI to get feedback: “Tell me what you think” is actually a great way to get feedback on an early idea
  • Ask “What am I missing?” or “What are potential counterarguments?”
  • Compare the AI’s perspective with your own reasoning; one of my favourite prompts is “Tell me why I’m wrong”
  • HPE example: Develop your initial differential diagnosis for a complex case, then ask AI to review it. Notice where your clinical reasoning aligns or differs from the AI’s suggestions, and reflect on why those differences exist. Did you overlook something important, or did you incorporate contextual knowledge the AI lacks?

2. Practice collaborative refinement (intermediate level)

Rather than starting from scratch with AI, take your initial rough ideas that you’ve now expanded and use AI as a refinement tool. This preserves your agency while leveraging AI’s strengths as a collaborative partner.

  • Start with a rough draft of your own work: “Take on the role of a critical friend”
  • Use AI to help explore different ways to express or structure your ideas: “What would this look like to [a different profession] / [stakeholder]?”
  • Actively evaluate and choose which suggestions align with your intent: “I like THIS but not THAT”, “Try again but this time…”
  • HPE example: Draft a treatment plan based on your clinical judgement, then use AI to explore alternative approaches. Perhaps the AI suggests a treatment option you hadn’t considered, prompting you to research its appropriateness for your specific patient’s circumstances and comorbidities.

3. Use AI for metacognition (advanced level)

One of the most powerful ways to engage with AI is to use it as a mirror for examining your own thinking processes. How does your approach differ from the AI’s, and what can that teach you?

  • Ask AI to explain its reasoning process: “What was your rationale for…” or “Explain what you meant by…” (although remember that it’s not ‘thinking’ in the way it pretends to be)
  • Compare its approach to problem-solving with your own: “I was thinking THIS; let’s contrast that with your approach…”
  • Reflect on where and why your thinking differs from the AI’s suggestions (maybe even step away from the AI and think about what assumptions you’re bringing into the discussion)
  • HPE example: After working through a difficult diagnosis, ask the AI to analyse the same case and explain its reasoning. You might discover that you rely more heavily on recent clinical experiences while the AI emphasises statistical prevalence. This insight helps you recognise potential biases in your own decision-making process.

4. Develop systematic comparative analysis (expert level)

At the highest level of engagement, you can create frameworks for understanding when and how to integrate AI into your thinking process, and when to trust your own judgement over algorithmic suggestions.

  • Deliberately explore multiple approaches to a problem: “Suggest 5 other solutions I could explore, with advantages and disadvantages for each”
  • Analyse patterns in AI’s vs human reasoning processes: “You seem to suggest…more frequently, while I favour…”
  • Create frameworks for when to rely on each approach: “Let’s create a mental model for how I could use each of your suggestions in different contexts; let’s start with you suggesting a few cognitive frameworks that seem relevant”
  • Document and learn from cases where AI and human judgement differ: keep an AI prompt diary or library where you can document the outcomes of your interactions with AI; this will help you develop a mental model for when to hand off responsibility to AI and when to take it back
  • HPE example: Over time, develop personal guidelines for when to integrate AI assistance in complex cases. You might discover that AI excels at identifying rare conditions that match specific symptom patterns, while your clinical intuition works better for patients with ambiguous presentations or complex psychosocial factors.

The challenge of thinking with AI

The key is maintaining active engagement, where the AI’s output becomes one of your inputs, rather than simply passively consuming that output. This requires effort and intention—it’s easier to accept an answer than to evaluate it critically, easier to outsource thinking than to enhance it with new tools.

But surely this is the same challenge we’ve always faced in education? We’ve long known that learning requires active participation, yet passive approaches to learning persist (often because this is what the system incentivises). In this context, AI is not just a challenge but an opportunity to reinforce the importance of critical thinking and active engagement in learning, not only for our students but for us as well.



Comments

One response to “Thinking with AI: A framework for active engagement”

  1. Giuseppe Cimadoro

    Well said, I fully agree, and I’ve also contributed to spreading this conceptual approach: https://www.timeshighereducation.com/campus/dont-just-chatgpt-turn-critical-interrogation