In this episode of the AI Alignment podcast from the Future of Life Institute, hosts discuss the following:
- The importance of current AI policy work for long-term AI risk
- Where we currently stand in the process of forming AI policy
- Why people worried about existential risk should care about present-day AI policy
- AI and the global community
- The rationality and irrationality around AI race narratives
What stood out for me was the relative value of engaging with AI policy development as opposed to focusing solely on capability development. It’s easy to get caught up in the hype surrounding the features of cutting-edge AI systems, but it’s equally important to spend time developing the legal and regulatory frameworks that will shape how those systems are deployed in the future. Here are some reasons why policy development is a high-value area of research right now:
- Experience gained on short-term AI policy issues is important for being considered a relevant advisor on the long-term AI policy issues that will arise in the future.
- Very few people currently working in government, politics, or policy communities care about Artificial General Intelligence (AGI) safety.
- There are opportunities to influence current AI policy decisions in order to provide fertile ground for future policy decisions or, better but rarer, to directly shape AGI safety policy today through evergreen texts. Future policy is path-dependent on the policy we implement today; what we do now sets precedent.
- There are opportunities today to develop a skillset that transfers to other policy issues and causes.
- Few resources are currently being devoted to this avenue for impact, so the return on investment is quite good.
While this podcast episode isn’t specifically about AI in healthcare, I’ve nonetheless added it to my public Zotero library of resources on AI in healthcare. The discussion goes into some depth on the importance of being involved in policy development, which is especially important in the domain of AGI and the potential of AI as an existential threat, but it is also valuable to consider how we could be involved in policy development around AI in health systems.