Update: I remembered that I wrote a post addressing much the same question almost a year ago. It’s called *On the ethics concerns around requiring students to use AI*.
I was asked recently about the possibility of disadvantaging some students when we assume that everyone has access to generative AI.
The intuitive response is that universities can’t design assessment tasks requiring the use of AI, unless all students have access to the same kind of AI. I don’t think it’s that simple. Differences in access to technology mean that some students will progress more quickly than others. We’ve always seen this, long before AI. During Covid we needed government and institutional support just to provide access to devices and the internet (and that wasn’t equal access either).
There are a hundred other places where we see inequity affecting students’ ability to learn. Some students have to work multiple jobs to support themselves, while others can rely on their parents. Some students have to travel for hours to get to campus, while others live in res. Some students can afford personal tutors and extra lessons, while this is a luxury that’s unaffordable for most. I don’t want to distract attention from the question, but ‘disadvantage’ isn’t unique to the introduction of AI.
Having said that, I believe that we’re seeing more of a democratised approach to AI access than in many other areas of higher education. On Monday, OpenAI made GPT-4 available to everyone. For free. Claude 3 is now available across Europe. The free version of Google’s Gemini is very good. Perplexity gives you 5 prompts every day with their best model. And many universities have provided students with access to GPT-4 via Copilot as part of their institutional licensing with Microsoft. And you can access these tools on quite basic mobile phones.
Granted, you still need an internet connection for all of the foundation models (i.e. Claude, GPT, Copilot, Gemini), but we’re also seeing progress in the development of small language models that run on mobile devices, as well as very good open-source models, like Llama 3 from Meta. You can already run these ‘slightly-better-than-GPT-3.5’ models on a laptop (admittedly, it needs to be a decent laptop), which will address all sorts of issues alongside the challenge of needing an internet connection.
We’ve also never thought that we should hold back the development of some students because they have disproportionate access to resources. We’ve never tried to hamstring students with internet access at home, so that they don’t enjoy an advantage over students who only have access on campus. I think we have a responsibility to ensure that our students know how to use AI, and the best way to do that is to deeply integrate it into all aspects of learning, teaching, and assessment.
When the internet started becoming mainstream, I started requiring my students to use it, even though most didn’t have access from home. I didn’t believe I was doing them any favours by not teaching them how to use it effectively. Students with access at home would use it anyway, which meant that they’d progress more quickly and increase the divide. Universities think of ‘advantage’ mostly in terms of how it affects assessment outcomes. I tend to think of it in terms of how it affects students’ futures. When they interviewed for a job after university, I didn’t want my graduates to be disadvantaged by not knowing how the internet could be used to enhance practice. And so I insisted they use it.
Note that there are many other factors that could affect this question, which I haven’t even considered in this post:
- Universities could provide all their students with access to the paid tier of cutting-edge models, but then we’d see students at rich institutions enjoy a relative advantage over those at less well-resourced institutions.
- Companies developing the best foundation models could provide free access to all students at a university, with the idea that they’d develop preferences for certain models that they’d take into the workplace where they’d be paying customers.
- Students at university could complete assessments in groups, averaging out the differences in access. In a more extreme version of this, you could have the entire cohort working to solve very difficult problems, where smaller teams within the cohort are responsible for working on pieces of the larger problem.
Anyway, there’s a lot more I could say in this post, but I need to wrap it up. I don’t think that the ‘disadvantage’ question ever goes away, no matter how close the free tier of access is to the paid tier of AI services. I also don’t believe that access to the most cutting-edge foundation models will be the differentiator (at least, not since the widespread availability of GPT-4 level models). But I do think we need to change assessment so that it looks very different to the current model.
In short, I don’t believe that the key issue is access to AI, but rather how assessments in higher education accommodate technological disparities among students.