This user guide for Claude is an excellent resource, not only for understanding how to use Claude more effectively, but also for understanding language models in general.
There’s an introduction, sections on prompt design and useful hacks for improving Claude’s responses, and an overview of the use cases you might consider for Claude. It also includes a glossary to help with some of the more technical terms, e.g. “tokens”.
Here is an example of the kind of useful background that the user guide includes:
- 🎭Claude is “playing a role” as a helpful assistant. It will often incorrectly report its own abilities, or claim to be “updating its memory”, when in fact it does not have any memory of prior conversations!
- ➗Claude will often make mistakes with complicated arithmetic and reasoning, and sometimes with more basic tasks. Given a long list of instructions, it will often make mistakes when attempting to comply with all of them — but see the guide’s “Break complex tasks into subtasks” and “Prompt Chaining” sections for some workarounds.
- 👻Claude still sometimes hallucinates or makes up information and details. When asked questions about a long document, it will sometimes fill in details from its training data that aren’t actually present in the document.
- 🌐Claude has read a lot on the internet, so it knows things about the real world… but it does not have internet access.
- ⏳Claude was trained on data that can be up to 2 years out of date.
- 📅Similarly, Claude does not know today’s date, nor does it know about current events.
- 🔨It cannot (yet!) take actions in the real world — but it can suggest actions to take.
- 📇It cannot (yet!) look things up — but it can suggest what to look up.
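The “break complex tasks into subtasks” workaround mentioned above can be sketched in a few lines of Python. This is only an illustration of the chaining pattern, not code from the guide: `call_claude` is a hypothetical placeholder standing in for a real API call, and the subtask prompts are invented for the example.

```python
# Prompt chaining: run one subtask per call and feed each result into
# the next prompt, instead of packing every instruction into one prompt.

def call_claude(prompt: str) -> str:
    # Hypothetical stand-in for an actual call to Claude's API.
    return f"<response to: {prompt!r}>"

def chain_prompts(task: str, steps: list[str]) -> str:
    """Run each subtask prompt in order, threading the output forward."""
    context = task
    for step in steps:
        # Each subtask prompt carries the previous step's output as context.
        prompt = f"{step}\n\nContext so far:\n{context}"
        context = call_claude(prompt)
    return context

result = chain_prompts(
    "An article about language models...",
    [
        "Extract the key claims from the text.",
        "Check each claim for internal consistency.",
        "Write a two-sentence summary of the consistent claims.",
    ],
)
```

Because each call sees only one instruction plus the prior output, the model has fewer simultaneous constraints to satisfy — which is exactly the failure mode the guide flags for long instruction lists.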
I appreciated the UI decision to surround examples of good prompts with a green border and bad prompts with a red border — a subtle design choice that signposts context for the reader at a glance.