Michael Rowe

Trying to get better at getting better

I disagree. Beyond ’10 myths’ of AI in education

Giray, L. (2024). Ten Myths About Artificial Intelligence in Education. Higher Learning Research Communications.

I’m toying with the idea of calling out papers and posts I think are actively harmful to the ‘AI in healthcare and education’ literature. Not papers that are merely useless (for example, papers talking about all the ways AI might be used in any particular domain), but papers that, for whatever reason, confuse the conversation and actually restrict our understanding. To be clear, I’m not talking about engaging in a thorough and academic critical analysis; these are more likely to be hot-takes, which means I could get this wrong. Use your own common sense when reading what I think.

In the first post of this kind, I’m looking at a paper by Giray, who presents 10 (why is it always a convenient ’10’ in these kinds of articles?) so-called myths about AI in education.

Firstly, I didn’t read the paper closely; I skipped to the subheadings and skimmed each paragraph under them. For this kind of paper, that’s all you really need. I was disappointed, because these kinds of papers can be important if they really do clarify and explain relevant ideas. This is not one of those papers, and here are some of the issues I have with it.

Evidence

There’s no evidence provided for any of these ‘myths’. While some are representative of common headlines you’d see in the media, e.g. ‘AI Will Replace Teachers’ (although even then, most of the headlines I’ve seen talk about how AI will not replace teachers), most of these ‘myths’ feel like the author made them up, e.g. ‘AI Will Replace the Need for Classrooms’, ‘AI Can Solve All Educational Problems’, ‘AI is Only for Science, Technology, Engineering, and Mathematics (STEM) Subjects’, and ‘AI is Always Objective’. No-one spending any time in this domain thinks any of these things.

Also, the evidence provided for some of the counter-arguments is old. In most other contexts, I’d have no real issue with this, but I can’t take seriously studies that are 5-10 years old being used to support arguments about AI capabilities today. That’s like saying “AI can’t generate video” and citing a study that used GPT-3.5: it was true when it was published, but it’s no longer true. A paper published in 2024 that makes claims about AI capabilities must cite the most up-to-date research. If you want to demonstrate a trend, then by all means show how (symbolic) AI was brittle in 2017, and how (generative) AI remains brittle in 2024 (in some contexts). But that’s not what happened here.

Conceptual clarity

The author conflates very different features, e.g. intelligence, consciousness, understanding, and so on. This kind of bait-and-switch tactic is fairly common in the AI literature, where authors make claims in titles and headings, which turn out to be different to what they’re actually talking about.

One fairly common example is talking about ‘intelligence’ initially and then switching to ‘consciousness’ or ‘subjective experience’. We can all agree that AI is not conscious, but when you make the argument that it’s not conscious, and therefore it can’t be more intelligent than humans, you’ve lost me.

Alternative approaches to this kind of paper

IMO, if you want readers to take this kind of paper seriously, you could do the following:

  • Don’t call them myths. Myths are stories that have been around for at least decades (IMO). I would have called them ‘inaccurate ideas’, or ‘harmful claims’, or something else that more accurately describes them.
  • Define ‘AI’. In some examples, the author is clearly referring to generative AI, but then also cites 2017 and 2018 sources (Marcus and Tegmark, for example) that cannot be referring to generative AI. You can be generic in your use of ‘AI’, but you must have an operational definition of what that means.
  • Use examples of myths that most readers will recognise, either using studies (or even news articles) showing that these ideas are relatively common. The reader has to recognise the myth, not keep saying to themselves, “But no-one thinks that.”
  • Be very clear in your use of terminology. Consciousness is not intelligence. And avoid using words like ‘smart’, which is not the same thing as ‘intelligent’.
  • Avoid trying to get to 10 (or any other convenient number). What was the cut-off for the list? Why 10 and not 5? It just feels arbitrary.

Anyway, those are my thoughts. I could be wrong. I’m just tired of reading headlines that do little more than muddy the waters.
