AI safety needs social scientists

…we need social scientists with experience in human cognition, behavior, and ethics, and in the careful design of rigorous experiments. Since the questions we need to answer are interdisciplinary and somewhat unusual relative to existing research, we believe many fields of social science are applicable, including experimental psychology, cognitive science, economics, political science, and social psychology, as well as adjacent fields like neuroscience and law.

Irving, G. & Askell, A. (2019). AI safety needs social scientists. OpenAI.

The development of AI, and its implications across society, is too important to leave to computer scientists alone, especially when it comes to AI safety and alignment. Human values are hard to encode into software because our thinking about them is uncertain and bound up with human rationality, bias and emotion. But because aligning AI systems with our values is fundamental to those systems' ability to make good decisions, we need a wide variety of perspectives aimed at addressing the problem.

Link to the full paper on Distill.

By Michael Rowe

I'm a lecturer in the Department of Physiotherapy at the University of the Western Cape in Cape Town, South Africa. I'm interested in technology, education and healthcare, and I look for places where these things meet.