If your initial prompt isn’t specific enough, a generative AI tool (Large Language Model) may make assumptions about your level of understanding of the problem. The response to your original question or instruction may not be helpful, and it can feel like you’ve hit a barrier.
When that happens, use follow-up prompts to dig deeper.
For example:
- Explain the second paragraph in simpler language.
- What does X mean?
- Define Y.
- Relate your response to… .
- Why is it important to…?
- What else can you tell me about this?
- I’m not familiar with… . Please expand on each item in your list.
- What isn’t included in your list?
- Generate a list of questions that will help me to explore this further.
- Give me a list of multiple-choice questions (MCQs), but don’t tell me which response is correct.
When you get into the habit of following up with generative AI, you’ll quickly find yourself engaging in what feels like a natural conversation with an expert.
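The same pattern works if you’re scripting the conversation rather than typing into a chat interface: each follow-up prompt gets appended to the running conversation so the model answers in context. Here’s a minimal sketch using the OpenAI Python client; the model name, initial prompt, and follow-ups are purely illustrative:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def ask(messages):
    """Send the whole conversation so far and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=messages,
    )
    return response.choices[0].message.content


# Start the conversation with an initial prompt.
messages = [{"role": "user", "content": "Explain how public-key cryptography works."}]
reply = ask(messages)
print(reply)

# Each follow-up is appended to the history, so the model answers it
# in the context of its earlier responses rather than in isolation.
for follow_up in [
    "Explain the second paragraph in simpler language.",
    "What isn't included in your explanation?",
    "Generate a list of questions that will help me explore this further.",
]:
    messages.append({"role": "assistant", "content": reply})
    messages.append({"role": "user", "content": follow_up})
    reply = ask(messages)
    print(reply)
```

The key point is the growing `messages` list: without it, each follow-up would be answered as if it were a brand-new question, which is exactly the barrier the follow-up prompts are meant to break through.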
Of course, all the usual caveats apply: LLMs make stuff up. You can’t trust them. They’re biased. But none of this matters much if you’re using them to poke and prod at ideas rather than taking them at face value. Review the conversation with the system, pull out the bits that resonate or seem meaningful, and use that as the basis for ongoing exploration through more traditional means.