Note: I wrote this over the course of a busy day full of meetings. At the end of the day I just wanted to get it out there. I’m not sure that I’m fully on board with the arguments, but the thing that I love about blogs is that it doesn’t matter. So these are my first, rough, half-formed thoughts on this idea. Let me know what I got wrong.
Universities are currently working through how to respond to student use of AI, and as part of that process concerns are inevitably raised about the potential for AI to disadvantage some students. Specifically, students who can afford access to better language models through paid tiers will have an advantage over those who can't.
There seem to be two assumptions worth noting here:
- Universities assume that using AI for assessment tasks is what’s important.
- Universities automatically take the position that AI is about competitive advantage.
This seems bizarre.
I’ll address the second assumption first.
The history of higher education is a history of privilege and of disadvantage, so it seems odd to single out AI. Why are we concerned about the ethics of using AI, over all the other ethical concerns we could raise?
- We disadvantage students who don’t own laptops when we require them to submit typed assignments.
- We disadvantage students who live far away from campus when we require them to attend class in person.
- We disadvantage students without internet access when we require them to use Blackboard.
- We disadvantage poor students by making them work to pay for university fees, while students with wealthy parents don’t have this burden.
By all means, let’s talk about disadvantage in higher education. But let’s have an honest discussion about what that really means in the broader context. There is inequity in higher education and in society, and it’s likely that AI will increase that inequity, in the same way that being literate increases the likelihood of success relative to someone who is illiterate. I don’t see a way around this, other than trying to limit the use of AI by everyone. Which obviously cannot work.
Another solution might be for a higher education institution to pay the license fee so that everyone can use commercial AI models, but all this does is shift the problem up a level. Instead of worrying about inequity between students within an institution, we’d need to worry about inequity between institutions. And again, we have this already. And all the time we don’t enable students to use AI effectively, they’re losing out on the opportunities this technology provides.
The first assumption bound up in the ethics concern about requiring students to use AI, that performance on assessment tasks is what matters, is what gives the worry about some students outperforming others its force. But we set up the context in which students compete against each other. Grading and ranking students is one of the metrics we care most about in higher education, so we create the conditions for competition in the first place. We could change our assessment practices so that the very idea of gaining an advantage in assessment grades stops making sense.
I remember having similar conversations when universities in South Africa were wrestling with the arrival of the internet, and colleagues were saying that we couldn’t require students to use the internet because many (most?) of our students didn’t have internet access. I argued that not using the internet would put them at a greater disadvantage in the future, because they’d be competing for posts with students who did use the internet. I thought we had a responsibility to ensure that our graduates could use this technology to support their learning and their careers.
I want students to use AI to support their learning rather than to inform their assignments. I want them to use it to enhance the process of learning, rather than to produce better products that we use as proxies for learning. I don’t care if students with better language models can write better essays, because I don’t care about essays (who writes essays?). I want all students to use AI to be better today than they were yesterday, for whatever definition of ‘better’ is relevant for them. We could assess the process instead of the product, and we could ask how the combination of student and AI solved a real-world problem.
I acknowledge that AI will enable some students to outperform others. But some students won a genetic or geographical or social lottery that enables them to outperform others anyway. They live closer, have more money, more books at home, better social networks, and so on, all of which give them a competitive advantage. AI will add to that. But I don’t think that’s the problem we need to focus on.
AI enables every student to accelerate their own learning, regardless of whether they use free or commercial language models. Yes, some students will have access to better support through AI, but some students already enjoy an advantage over others. And since some people WILL be using AI, we have a responsibility to ensure that ALL students learn how to use it effectively.
I agree that there’s an ethics concern with requiring students to use AI. I believe that it’s unethical not to require students to use AI.