Allowing the proliferation of algorithmic surveillance as a substitution for human engagement and judgment helps pave the road to an ugly future where students spend more time interacting with algorithms than instructors or each other. This is not a sound way to help writers develop robust and flexible writing practices.
Source: Another Terrible Idea from Turnitin | Just Visiting
First of all, I don’t use Turnitin, and I don’t see any good reason to do so. Combating the “cheating economy” doesn’t depend on catching students; it depends on creating conditions in which students believe that cheating offers little real value relative to the pedagogical goals they are striving for. In general, I agree with much of what the author is saying.
So, with that caveat out of the way, I wanted to comment on a few other points in the article that I think rest on significant assumptions and limit its utility, especially with respect to how algorithms (and software agents in particular) may be useful in the context of education.
- The use of the word “surveillance” in the quote above establishes the context for the rest of the paragraph. If the author had used “guidance” instead, the tone would be different. The same goes for “ugly”: remove that word and the meaning of the sentence changes substantially. These word choices make it clear that the author has an agenda, one that clouds some of the other arguments about the use of algorithms in education.
- For example, the claim that it’s a bad thing for students to interact with an algorithm instead of another person is empirical; it can be tested. But it’s presented here in a way that implies that human interaction is simply better. Case closed. But what if we learned that algorithmic guidance (via AI-based agents/tutors) actually led to better student outcomes than learning with or from other people? Would we insist on human interaction because it would make us feel better? Why not test our claims by doing the research before making judgments? (A minimal sketch of what such a test might look like follows this list.)
- The author uses a moral argument (at least, that was my take based on the language used) to position AI-based systems (specifically, algorithms) as inherently immoral with respect to student learning. This conflates the corporate responsibility of a private company like Turnitin to make a profit with the (possibly pedagogically sound) use of software agents to enhance some aspects of student learning.
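To make the “it can be tested” point concrete: here is a minimal sketch of how one might compare outcomes between two instructional conditions. The data and group labels are entirely hypothetical, invented for illustration; a real study would need proper experimental design, controls, and sample sizes.

```python
# Hypothetical comparison: post-test scores for students who received
# algorithmic guidance vs. human instruction. All numbers are invented.
from scipy import stats

algorithmic_guidance = [78, 85, 82, 90, 74, 88, 81, 79, 86, 83]
human_instruction = [80, 77, 84, 76, 82, 79, 85, 78, 81, 80]

# Two-sample t-test: is the difference in mean outcomes significant?
t_stat, p_value = stats.ttest_ind(algorithmic_guidance, human_instruction)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A small p-value would suggest a real difference in outcomes,
# giving us evidence rather than intuition, in either direction.
```

The point isn’t this particular test; it’s that the question has an empirical answer we could actually go and get.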
Again, there’s some good advice around developing assignments and classroom conditions that make it less likely that students will want to cheat. This is undoubtedly a Good Thing. However, some of the claims about the utility of software agents are based on assumptions that aren’t necessarily supported by the evidence.