Tag: bias
-
Clinical AI scribes and the redistribution of narrative power
Clinical AI scribes redistribute narrative control in medical consultations, creating unresolved tensions between equity and manipulation. The same mechanisms that might help marginalised patients push back against dismissive care could enable strategic gaming of medical records. This technology reveals that clinical documentation was never purely objective and has always been shaped by power.
-
ChatGPT shows hiring bias against people with disabilities
Language models exhibit hiring bias against people with disabilities, ranking resumes with disability-related achievements lower than those without. However, this bias mirrors existing societal prejudices in human hiring practices. While AI bias can be reduced through simple prompts, addressing human bias is far more challenging.
-
ChatGPT won’t be your doctor
Commercial frontier AI models like ChatGPT and Llama are known to hallucinate, so new research merely confirming this is redundant. Instead, attention should be on specialised medical AI systems like Google’s AMIE, which are showing impressive improvements in diagnostic accuracy. These purpose-built models, not general-purpose language models, are likely to be integrated into healthcare products.
-

BIP AI – AI for organisation and communication
In this workshop for the Blended Intensive Programme on AI in education and research, Antonio Lopes, Hugo Santos and I explore the transformative potential of integrating generative AI into personal and professional workflows for academics. Practical use cases demonstrate how AI can streamline tasks, from constructing lecture outlines to drafting emails. The workshop provides a…
-

BIP AI – AI in research: Opportunities and challenges
In this Blended Intensive Programme on AI, Guillem Jabardo and I explore the potential of generative AI to support all stages of the research process. However, while extremely powerful, these tools still have limitations, necessitating critical review. The ability of generative AI to augment human cognition represents a paradigm shift for academia.
-
Stop using AI detection services because they don’t work
The researchers conclude that the available detection tools are neither accurate nor reliable, and are biased towards classifying output as human-written rather than detecting AI-generated text. Furthermore, content obfuscation techniques significantly worsen the tools’ performance. Weber-Wulff, et al. (2023). Testing of detection tools for AI-generated text. International Journal for Educational Integrity,…
-
My biased enthusiasm for generative AI, clearly articulated
I understand the serious ethical concerns many have raised about generative AI. These are important issues that deserve thoughtful debate. I believe that I know something about these concerns and I know the critical position I’m meant to take as an academic working at the intersection of education and technology. I’ve spent 15 years in…
-
Workshopping AI in higher education with students
I’m thinking about contributing to a workshop activity that involves students working on practical issues related to the implementation of AI-based services in higher education. Here are some ideas that I think might be worth exploring.
-
Generative AI is useful
Generative AI is useful, in the same way that electricity is useful. I use Claude for a wide range of tasks, every day. And today is the worst that Claude will ever be. Claude – and other generative AI services – will never ever again be as crap as it is today. Make no mistake,…
-
ChatGPT is – and isn’t – a good psychotherapist
First, a caveat. I know that ‘psychotherapy’ and ‘doctors treating depression’ aren’t the same thing, so this isn’t a direct comparison. However, it’s worth noting that few people are going to make the distinction. They’re going to see conflicting articles about ChatGPT being good – and bad – for ‘mental health related stuff’, and find…
-
Language model hallucination can still be accurate
I wanted to test if Claude AI could read and summarise an article when only given a URL. According to the response from the model, Claude can’t visit links. However, its summary of the article at the URL is spot on. Like, really good. So either Claude is lying and can visit links, or it’s…
-
Podcast: Clinicians’ ‘Number-One Wish’ for Artificial Intelligence
…we installed cheap depth sensors that can collect human behavior data on patients and clinicians without infringing on their privacy, because these are not photo grabs of people’s faces and identities. With that information, we can observe longitudinally, 24/7, whether proper care is being given to our patients and provide feedback in the health delivery…
-

Resource: AI Blindspot – A discovery process for spotting unconscious biases and structural inequalities in AI systems.
AI Blindspots are oversights in a team’s workflow that can generate harmful unintended consequences. They can arise from our unconscious biases or structural inequalities embedded in society. Blindspots can occur at any point before, during, or after the development of a model. The consequences of blindspots are challenging to foresee, but they tend to have…
-
Developing AI will have unintended consequences.
I’m reading the collection of responses to John Brockman’s 2015 Edge.org question: What to think about machines that think and wanted to share an idea highlighted by Peter Norvig in his short essay called “Design machines to deal with the world’s complexity”. Pessimists warn that we don’t know how to safely and reliably build large,…
-

Don’t blame biased algorithms for outcomes you don’t like.
“What algorithms are doing is giving you a look in the mirror. They reflect the inequalities of our society.” Sandra Wachter, in The Week in Tech: Algorithmic Bias Is Bad. Uncovering It Is Good. Condliffe, J. (2019). The New York Times. We can start by agreeing that algorithms are biased. Unfortunately, this is where most…
-

10 recommendations for the ethical use of AI
In February the New York Times hosted the New Work Summit, a conference that explored the opportunities and risks associated with the emergence of artificial intelligence across all aspects of society. Attendees worked in groups to compile a list of recommendations for building and deploying ethical artificial intelligence, the results of which are listed below.…
-

MIT researchers show how to detect and address AI bias without loss in accuracy
The key…is often to get more data from underrepresented groups. For example…an AI model was twice as likely to label women as low-income and men as high-income. By increasing the representation of women in the dataset by a factor of 10, the number of inaccurate results was reduced by 40 percent. Source: MIT researchers show…
-

The AI Threat to Democracy
With the advent of strong reinforcement learning…, goal-oriented strategic AI is now very much a reality. The difference is one of categories, not increments. While a supervised learning system relies upon the metrics fed to it by humans to come up with meaningful predictions and lacks all capacity for goal-oriented strategic thinking, reinforcement learning systems…