
I’ve been thinking a lot about practical guidance for academics who are feeling disoriented by ChatGPT and other language models. Some recognise there’s no getting away from this, and are actively experimenting with the technology to try to understand it better. Even if they’re not embracing LLMs with a happy heart, they can see we have no choice but to engage with them.

Then there are those who believe that the technology must be banned immediately. I was reminded of this during a conversation last week with a colleague in Australia, which stopped me in my tracks. I’ve reached a point where the inevitability of it being everywhere, embedded into everything, makes it easy to forget that there are some who think a ban is not only possible but absolutely necessary.

And then there are the majority of people in the middle, who aren’t sure what to think but who nonetheless know they need to start making decisions about how to respond in their programmes. They may not be happy about the change, but they’re not resisting it. They’re just looking for guidance and suggestions for how to move forward.

All these groups need to be included in a conversation aimed at making sense of language models in the context of higher education, but the messaging across the sector feels ambiguous: engage with LLMs, but be careful. Experiment, but not in every scenario. Encourage students to use them, but not for certain tasks.

As part of our institutional working group on AI, I’ve been looking for concrete suggestions around some of the most common questions that come up.

Faculty presentations and Q&A sessions

I think that faculty in higher education institutions should have regular opportunities to come together for discussion and engagement, as the technology – and guidelines around using it – are changing all the time. Faculty need space to present their concerns, discuss options, and highlight challenges they’re facing on the ground. Two of the earlier contributions I made to our school were in the form of drop-in discussions, which I’ve since published in our new faculty development podcast.

Generative AI should not be cited as an author

These guidelines around authorship and AI were published in February by the Committee on Publication Ethics (COPE), and I think they make sense:

AI bots should not be permitted as authors since they have no legal standing and so cannot hold copyright, be sued, or sign off on a piece of research as original.

So, don’t provide authorship credit to ChatGPT or any other language model.

Generative AI should not be cited as a source

Related to this, I also don’t see how it helps to require anyone to cite an LLM as a source, as if the information it provides were some kind of ground truth. There’s no point citing ChatGPT because there’s no direct line from its response to a source for that response. ChatGPT cannot explain where its answers come from, so what is the point of citing it? I think citing ChatGPT is a strategic move that people believe will help them avoid accountability.

Accountability for use of the technology remains with the user

Speaking of which, I think that we need to insist that users of the technology are accountable for the outcomes of that use; we can’t have teachers, students, or administrators saying that they didn’t know the output might be flawed. From the UK Department for Education guidance on generative AI in education:

Whatever tools or resources are used in the production of administrative plans, policies or documents, the quality and content of the final document remains the professional responsibility of the person who produces it and the organisation they belong to.

I think this is a good principle: accountability remains with the user. And this general principle was tested recently, when a lawyer used ChatGPT to prepare documents for court and was held accountable for the fake cases it generated.

(Teachers + AI) + (Students + AI) = Better outcomes

I think we should be focused on the outcomes we think matter for our graduates and the people they’ll interact with in society. In other words, what does a qualified nurse or occupational therapist need to be able to do? When someone I care about is in the ICU, I want them to have good outcomes. Do I care if engineers use generative AI? Not really. I want bridges and buildings to not fall over.

So, instead of asking what students need to demonstrate in the higher education context (bearing in mind that our assessments tend to be informed by the culture of school, not the culture of work), we should ask what students need to do after they graduate. If graduates will use AI in their professional context (hint: they will), then students should be required to use AI. If we’re uncertain about the use of AI in a professional context, students should complete the task both with and without AI, and then compare and contrast the outcomes. I want our students to develop a critical understanding of when to use AI and when not to use it. What is AI good for, and what can’t it do yet? Otherwise, we’re training our students to develop skills that will soon be obsolete.

Concrete example of AI for an assessment task

This prompted me to think about a concrete example of an assessment task that explicitly incorporates the use of language models. This was also partly inspired by coming across this collection of ideas for using AI in education. I haven’t been through it yet, but I’ve seen it referenced in a couple of places, so I thought it’d be good to pass on.

This open crowdsourced collection by #creativeHE presents a rich tapestry of our collective thinking in the first months of 2023 stitching together potential alternative uses and applications of Artificial Intelligence (AI) that could make a difference and create new learning, development, teaching and assessment opportunities.

Guidance from the sector

Increasingly, we’re going to see professional regulatory bodies and government organisations publishing position statements on the use of AI – the UK Department for Education’s guidance on generative AI in education, for example. Even though these position statements are helpful, I still see most of them working from the assumption that human beings retain their privileged position at the centre of whatever activity is being discussed. For example, that humans are the best teachers and that the use of AI is aimed at making them more efficient.

…technology (including generative AI), has the potential to reduce workload across the education sector, and free up teachers’ time, allowing them to focus on delivering excellent teaching.

However, what I’m increasingly thinking about is how few people seem genuinely concerned that generative AI is very well-positioned to take on more and more of our roles as teachers. What if the real potential of generative AI is that it can deliver excellent teaching? What if AI is a better teacher than me?

And this is maybe a good place to bring in my last point, which is that higher education institutions, in their current format, may not be structured to take advantage of the benefits of AI. In this conversation between Sam Harris and Martin Rees, Rees suggests that universities are probably not fit for purpose in their current incarnation. He thinks we need to provide smaller units of learning that people can engage with over the course of their lives, rather than intense, focused, three-year residential degrees. I tend to agree. We’ll soon see more remote and mobile learning that is less text-based and more conversational in nature, smaller long-term modules, and more integration of AI as teachers.

Universities aren’t designed to offer the truly student-centred, customised and bespoke learning opportunities that AI can support, and I wonder how we’re going to respond.

