AI bias and black boxes
I’m getting tired of reading about bias and the black box nature of AI (even though I’ve been guilty of this myself).
Human brains are biased. Human brains are black boxes. We still trust the outputs of human brains.
I’m not suggesting that we just start trusting algorithms. But I do think that we’ll move past this concern, without first needing to solve the bias and black box problems (although we probably will solve them).
One reason we trust the outputs of human brains is that we trust things that look and behave as we do. Other human brains interact with the world in ways that resemble how we interact with it, and because we recognise that their outputs, over time, map onto our own perceptions of reality, we come to trust them. When you tell me that you think the sun will rise in a certain position tomorrow morning, and that comes to pass, I have another data point convincing me that your outputs are useful.
The same will be true of AI systems; their outputs will map onto reality in much the same way that we perceive reality (obviously, since they’re trained on the outputs of human brains, they’ll mimic human brains). And over time, we’ll come to trust AI outputs, even though they’re biased and black boxes.
We won’t need to know how they generate those outputs, in the same way that I don’t need to know how you generate yours. All I need to know is that your outputs are useful.
I enjoyed some of the subversive perspectives in the responses to this tweet. Here are a few:
so HigherEd is going to use AI to recruit students and read admissions apps/essays while simultaneously policing/punishing students for using AI. – Karen Costa
One of the things noted in session on plagiarism today was that many plagiarism policies, statements and resources look like they’ve been plagiarised. – Martin Compton
…and my favourite…
When I show faculty how generative AI can help them be more creative and make them more productive, they’re always amazed and super excited. Then when I say something like “It’s going to make your students more productive, too,” they get super grumpy. – David Wiley
Open source tools
I installed the Espanso text expander but haven’t yet had a chance to set up any snippets. One of the use cases I anticipate is giving feedback to students, where up to half of my comments are generic.
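Once I do get around to it, I imagine the setup looking something like the sketch below. Espanso matches live in a YAML file (typically match/base.yml); the triggers and feedback text here are hypothetical examples of mine, not snippets I’ve actually written yet.

```yaml
# match/base.yml — hypothetical feedback snippets for marking
matches:
  # Typing ":refs" expands into a reusable comment about referencing
  - trigger: ":refs"
    replace: "Your argument would be stronger with supporting references. Have a look at the module reading list for possible sources."

  # The $|$ placeholder positions the cursor after expansion,
  # so the generic comment can be personalised for each student
  - trigger: ":intro"
    replace: "Your introduction sets out the aim clearly, but consider $|$"
```

The appeal is that the generic half of a comment takes a few keystrokes, leaving more time for the part that’s specific to each student.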
The other tool I played around with is the Etherpad collaborative text editor. I don’t have a specific objective here, other than staying informed about the options that are available. I’m not naive enough to think I can convince my collaborators to move to Etherpad (away from Google Docs, for example), but maybe a reader of this blog will find it useful.
Learning design
A visually structured approach to learning design that helps you think through and support your students’ learning, based on the six learning types in Laurillard’s Conversational Framework. I haven’t looked into it in much detail, but I’ve used Laurillard’s work a lot in the past, so I made a note to explore it further. I’m teaching a new module at the moment, and this might be a good opportunity to test the approach before I run the module again next year.
Generative AI generating ideas based on semantic understanding
There’s a narrative around generative AI that goes something like, “Generative AI will never generate anything new…it can only parrot back to us the average of what we already know.” The claim is that it has no understanding of what it’s generating, and so can never produce real insight. This has become known as the stochastic parrot narrative.
I disagree with this, though maybe I’m just being pedantic. An insight that’s obvious to a member of another community (philosophers, for example) may not be obvious to me. Generative AI that gives me insight into philosophy, and in doing so helps me produce something I couldn’t have on my own, is surely part of the process of creating something new.
Anyway, it’s interesting to see work being done on semantic learning in language models, which may move us past this point of view. Related to this is the Allen Institute’s suggestion that language models could be used to generate genuinely new scientific ideas. It will be worth watching what new hypotheses this approach produces for testing.
This touches on a trend I’ve noticed recently. Many of the major criticisms of language models are being addressed, and progress is being made very quickly. If your strategy for dealing with generative AI is to hole up in the pockets of competence where AI struggles, I think your solace will be short-lived.