The logical endpoint is AI-generated text that is fully equivalent to typical/aggregate human-written text, with human-equivalent variation in short-range phrase structures and human-equivalent mixed-upness.
At that point, deliberate watermarking aside, AI-text detectors will have nothing systematic or reliable to key on.
Regardless of the mechanism, cheating is a social problem, not a technological one.
We shouldn’t be using technological solutions to try to solve social problems.
But we keep doing it anyway, because paying for a new software feature (e.g. an AI detection service) is easier than actually addressing the underlying problem.
Instead of investing time and money into improving our chances of catching students breaking the rules, universities should be taking two steps:
- Change the rules and redefine what it means to “cheat” (remember that “using the internet” used to be cheating)
- Create learning environments where cheating is obviously the Wrong Choice (I’ve been thinking of this as the learning-alignment problem)