“ChatGPT, what will be the impact of frontier language models on [PROFESSION]? Don’t sugar coat it.”
Models try to make me feel better when I ask certain questions; they always seem to be striking a balance between ‘good’ and ‘bad’. I’ve found that adding “don’t sugar coat it” produces responses that I think are probably more accurate, and also more worrying.
It’s worth noting that the model isn’t making a prediction about what will actually happen. So I’m not saying that this prompt gives you The Truth. Only that it seems to give me a more realistic (pessimistic?) response.
Try the prompt above, inserting your profession of choice. I tried ‘actuarial science’, ‘physiotherapy’, ‘teaching’, and ‘graphic design’.
All were sobering.
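If you’d rather run the prompt across several professions programmatically than paste it into the chat UI one at a time, here’s a minimal sketch. It assumes the official `openai` Python SDK (v1+) with an API key in your environment; the model name is my placeholder, not a recommendation:

```python
# Minimal sketch: run the "don't sugar coat it" prompt for several
# professions via the OpenAI API. Assumes OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "What will be the impact of frontier language models on {profession}? "
    "Don't sugar coat it."
)

for profession in ["actuarial science", "physiotherapy", "teaching", "graphic design"]:
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name; substitute whichever you use
        messages=[{"role": "user", "content": PROMPT.format(profession=profession)}],
    )
    print(f"--- {profession} ---")
    print(response.choices[0].message.content)
```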
Note: Claude gave significantly less comprehensive responses to the same prompts. This is one of the few cases I’ve come across recently where I preferred the output from ChatGPT.