Language models don’t sometimes hallucinate. They always hallucinate.
By now, most people have come across the issue of language models like GPT hallucinating: the model confidently generates output that is fluent and plausible but not grounded in the prompt or in fact. You may also find that responses increasingly diverge from the topic, as small errors compound over increasingly long sessions. When the response generated…
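
To make the compounding-error point concrete, here is a minimal sketch in Python, purely for illustration: if each generated token independently has some small probability p of being off-track, the chance that a long response contains at least one such token grows quickly with length. The constant, independent per-token error probability is an assumed simplification, not a measured property of any particular model.

```python
# Toy model of how small per-token errors compound over long generations.
# Assumption (for illustration only): each token is independently off-track
# with constant probability p.

def prob_at_least_one_error(p: float, n_tokens: int) -> float:
    """Probability that at least one of n_tokens is off-track,
    given independent per-token error probability p."""
    return 1.0 - (1.0 - p) ** n_tokens

if __name__ == "__main__":
    p = 0.01  # hypothetical 1% chance that any single token goes off-track
    for n in (10, 100, 1000):
        print(f"{n:>5} tokens -> {prob_at_least_one_error(p, n):.1%} "
              f"chance of at least one off-track token")
```

Even at a hypothetical p of 1%, a 100-token response already has roughly a 63% chance of containing at least one off-track token, and a 1,000-token response is all but certain to.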