As I’ve been using LLMs more and more, I’ve found that I prefer the outputs generated by Claude over any of the others. I still routinely compare the outputs of Claude to ChatGPT, Bard, and Perplexity, but in most cases, I end up choosing the responses from Claude.
In the case of Bard, I’m not impressed with the responses. Bard is getting better, and it may be that the replacement of PaLM 2 with Gemini (the LLMs behind the Bard chatbot) will enable Bard to leap-frog everyone else, but for now, I just don’t find it to be all that useful.
I appreciate Perplexity’s attempts to link its responses back to source material, and I’ve used it in cases where I know less about the topic, in the hope that I can have more trust in the outputs. Initially, I also liked the chatbot’s suggestions for follow-up questions. However, I find that I end up following those questions down a rabbit hole, rather than coming up with my own, and ending up somewhere far from where I wanted to be.
I still think that ChatGPT is a great chatbot, and there are cases where I find its outputs to be useful. But for the most part, I prefer the language generated by Claude.
For the past few months, I’ve been using the same prompt in all three chatbots, taking the best of each response, merging them, and then rewriting the text for myself.
But right now, if you told me I could only have access to one language model, it would be Claude.
One response to “I prefer the outputs of Claude, over other LLMs”
[…] This didn’t work because Claude doesn’t have access to up-to-date information. I’d thought it had access to more recent information, but I was obviously wrong. Having said that, I was running the same process in parallel with BingChat in Edge, which apparently uses GPT-4 and has access to the internet. But the responses I got from BingChat were, at best, similar to Claude’s (over the course of this writing exercise, the outputs from BingChat were typically worse than Claude’s, reinforcing my preference for Claude). […]