Tag: hallucination
-
ChatGPT won’t be your doctor
Commercial frontier AI models like ChatGPT and Llama are known to hallucinate, but further research demonstrating this is redundant. Instead, attention should be on specialised medical AI systems like Google’s AMIE, which are showing impressive improvements in diagnostic accuracy. These purpose-built models, not general-purpose language models, are the ones likely to be integrated into healthcare products.
-
Weekly digest 39
A weekly collection of things I found interesting, thought-provoking, or inspiring. It’s almost always about higher education, mostly technology, and usually AI-related.
-
Weekly digest 27
A weekly collection of things I found interesting, thought-provoking, or inspiring. It’s almost always about higher education, mostly technology, and usually AI-related.
-
BIP AI – AI for organisation and communication
In this workshop for the Blended Intensive Programme on AI in education and research, Antonio Lopes, Hugo Santos and I explore the transformative potential of integrating generative AI into personal and professional workflows for academics. Practical use cases demonstrate how AI can streamline tasks, from constructing lecture outlines to drafting emails. The workshop provides a…
-
BIP AI – AI in research: Opportunities and challenges
In this Blended Intensive Programme on AI, Guillem Jabardo and I explore the potential of generative AI to support all stages of the research process. While extremely powerful, these tools still have limitations that necessitate critical review. The ability of generative AI to augment human cognition represents a paradigm shift for academia.
-
Hallucinations aren’t a problem to be fixed
A few months ago I wrote a post explaining that language models don’t sometimes hallucinate; they always hallucinate. “…every single response is a creative endeavour. It just happens to be the case that most of the responses we get map onto our expectations; we compare the response against our (human) models of reality.” So I…
-
Generative AI is useful
Generative AI is useful, in the same way that electricity is useful. I use Claude for a wide range of tasks, every day. And today is the worst that Claude will ever be. Claude – and other generative AI services – will never ever again be as crap as it is today. Make no mistake,…
-
Link: New technique makes AI hallucinations wake up and face reality
https://thenextweb.com/news/ai-hallucinations-solution-iris-ai

Here is a condensed paragraph summary of the article, generated by Claude (having read the original article, I can attest that it’s a decent summary): New techniques developed by researchers at Iris.ai show promise for reducing AI hallucinations, the problematic tendency for systems like chatbots to generate false information. Their approach validates the factual…
-
Bing allows you to modulate the amount of ‘hallucination’ in your response
Last week I wrote about LLM hallucinations, and how this isn’t the problem that everyone thinks it is. “I expect that soon we’ll see language models with features that allow us to modulate the output in some way. We may want to dial up creativity or serendipity, in which case we’ll see less overlap with…
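A crude version of that dial already exists as the sampling temperature exposed by most LLM APIs. As a minimal sketch (assuming the OpenAI Python SDK, openai >= 1.0, an OPENAI_API_KEY in the environment, and an illustrative model name and prompt), asking the same question at different temperatures produces progressively more varied, ‘creative’ answers:

```python
# A minimal sketch of modulating output variability via sampling temperature.
# Assumptions: OpenAI Python SDK (openai >= 1.0), OPENAI_API_KEY set in the
# environment; the model name and prompt are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

prompt = "Name three peer-reviewed journals on AI in higher education."

for temperature in (0.0, 1.0, 2.0):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model works
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,  # 0 = most deterministic, 2 = most varied
    )
    print(f"temperature={temperature}:\n{response.choices[0].message.content}\n")
```

Note that temperature 0 reduces variance rather than guaranteeing factual output; even near-deterministic sampling can confidently produce a hallucinated answer.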
-
Language models don’t sometimes hallucinate. They always hallucinate.
By now, most people have come across the issue of language models like GPT hallucinating, where the model generates an output that’s unrelated to the prompt. Or, you may find that the generated responses increasingly diverge from the topic (as errors accumulate over increasingly long sessions). When the response generated…