In this presentation I describe the use of Generative AI (GenAI) for research. I start by introducing GenAI systems like ChatGPT, Claude, Gemini, and Perplexity, which are next-word predictors capable of generating multimodal content. These systems are rapidly improving in competence and are different to traditional ‘search’ in important ways. For example, they generate responses from scratch based on a given prompt, without referring to an a priori model of the world.
The presentation highlights the similarities between GenAI and humans, including biases, hallucinations, lack of data provenance, and the capacity of both to do good and harm in the world. I emphasise the importance of prompt development, particularly contextual richness and structured prompts, for obtaining high-quality outputs from GenAI.
I go on to suggest that we treat GenAI like an expert or a person, asking for ideas rather than answers, and evaluating responses while assuming some level of error. In the presentation I explore various use cases for GenAI as a research assistant, such as literature review, idea generation, summarisation, data collection, writing, grant writing, and data analysis.
Finally, I highlight the ethical implications surrounding authorship, originality, and transparency when using GenAI for academic work, and I stress the importance of evaluating outputs critically, especially given the current state of the technology.