Michael Rowe

Trying to get better at getting better

Bing allows you to modulate the amount of ‘hallucination’ in its responses

Last week I wrote about LLM hallucinations, and why they aren’t the problem that everyone thinks they are.

“I expect that soon we’ll see language models with features that allow us to modulate the output in some way. We may want to dial up creativity or serendipity, in which case we’ll see less overlap with our expectations around reality (i.e. more hallucination). Or we’ll dial up factfulness or realism, where we’ll get responses that map more explicitly onto what we see in the world (i.e. less hallucination).”

And then yesterday I saw exactly this feature in the mobile version of Bing:

I’ve been saying for a while that language models aren’t a computing paradigm that gives you the answer; they give you an answer. It’s up to you to decide what you’re going to do with it.
