https://80000hours.org/podcast/episodes/nathan-labenz-ai-breakthroughs-controversies/
AI entrepreneur Nathan Labenz discusses what AI can and can't do, concerns about AI deception, breakthroughs in protein folding, how self-driving cars compare to humans on safety, the potential of GPT models for vision, the online conversation around AI safety and Twitter's negative effect on it, contrasting views on AI progress, how anti-regulation sentiment in the tech industry could backfire, the importance of constructive AI policy discussions, concerns about facial recognition technology and autonomous AI drones, and how to stay up to date with AI research.
This conversation covers:
- What AI now actually can and can’t do — across language and visual models, medicine, scientific research, self-driving cars, robotics, weapons — and what the next big breakthrough might be.
- Why most people, including most listeners, probably don’t know and can’t keep up with the new capabilities and wild results coming out across so many AI applications — and what we should do about that.
- How we need to learn to talk about AI more productively — particularly addressing the growing chasm between those concerned about AI risks and those who want to see progress accelerate, a divide that may be counterproductive for everyone.
- Where Nathan agrees with and departs from the views of ‘AI scaling accelerationists.’
- The chances that anti-regulation rhetoric from some AI entrepreneurs backfires.
- How governments could (and already do) abuse AI tools like facial recognition, and how militarisation of AI is progressing.
- Preparing for coming societal impacts and potential disruption from AI.
- Practical ways that curious listeners can try to stay abreast of everything that’s going on.