
This is a presentation that I gave at the opening of the University of the Western Cape research week (30 October 2023). Download the presentation.
Overview (generated by Claude)
This presentation explores the emerging role of AI, specifically large language models like ChatGPT, in academic research. It provides background on how these models work: for example, they are next-word predictors trained on data with a 2021 cutoff, and they can hallucinate plausible-sounding but false responses when their knowledge runs out. Recent headlines show AI being used for literature reviews, idea generation, writing assistance, and more. However, the speaker cautions that language models have no grounding in reality and no concept of right or wrong.
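The "next-word predictor" idea can be illustrated with a minimal sketch: a toy bigram model that, given a word, returns the word most often seen after it in its training text. This is a deliberately simplified analogy, not how ChatGPT is actually implemented, but it makes the key point concrete: the model only echoes statistical patterns in its training data, with no notion of truth outside it.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny
# "training corpus", then predict the most frequent successor.
# (Real LLMs use neural networks over vast corpora, but the spirit
# of next-token prediction is the same.)
corpus = (
    "the model predicts the next word "
    "the model predicts the next token"
).split()

# Count successors for each word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent successor of `word`, or None if unseen."""
    if word not in follows:
        return None  # nothing outside the training data: no grounding
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))     # a word that followed "the" in the corpus
print(predict_next("banana"))  # None: the word never appeared in training
```

The `None` case mirrors the presentation's caution: outside its training data, such a model has nothing to anchor its answers, which is where hallucination enters in larger systems.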
The presentation gives examples of using a "research assistant" AI to help with tasks like literature reviews, suggesting research ideas, summarizing responses, analyzing data, and even grant writing. However, it notes biases in these models and their limited grasp of complex contextual knowledge. Services specializing in research applications of AI are also emerging.
In summary, while AI holds promise for amplifying human creativity and productivity in research, current systems remain unreliable and are not grounded in truth or ethical considerations. The speaker concludes that generative AI has enormous potential for creative ideation and drafting, but its outputs require careful human evaluation.