One of my favourite use-cases for LLMs is to ask them to point out the weaknesses in my position. We all have blind spots, and when you’re working from within a paradigm it can be very difficult to see what those blind spots are.
The prompts I use look something like this:
I need to complete TASK X, and this is the approach I’m thinking of taking. You are an expert in TASK X, and I’d like you to point out the potential problems with my approach. If you were in my position, what approach would you use?
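If you find yourself reaching for this prompt often, it’s easy to wrap it in a small script. Here’s a minimal sketch using the OpenAI Python client; the model name, function name, and example task are just illustrative placeholders, and any chat-capable model or provider would do.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def critique_approach(task: str, approach: str, model: str = "gpt-4o") -> str:
    """Ask the model to poke holes in a planned approach to a task."""
    prompt = (
        f"I need to complete {task}, and this is the approach I'm thinking of taking:\n\n"
        f"{approach}\n\n"
        f"You are an expert in {task}, and I'd like you to point out the potential "
        "problems with my approach. If you were in my position, what approach would you use?"
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # Hypothetical example task and approach, purely for illustration.
    print(critique_approach(
        task="migrating a Postgres database to a new schema",
        approach="Write a one-off script that copies tables row by row during a maintenance window.",
    ))
```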
I don’t have to take the LLM’s suggestions on board, but there’s almost always something for me to think about and reconsider. Using a language model to surface your blind spots like this is one of the simplest, most reliable ways I’ve found to get value out of these tools.