Podcast: On the Long-term Importance of Current AI Policy

In this episode of the AI Alignment podcast from the Future of Life Institute, hosts discuss the following:

  • The importance of current AI policy work for long-term AI risk
  • Where we currently stand in the process of forming AI policy
  • Why people worried about existential risk should care about present-day AI policy
  • AI and the global community
  • The rationality and irrationality around AI race narratives

What stood out for me was the relative value of engaging with AI policy development, as opposed to focusing solely on capability development. It’s easy to get caught up in the hype surrounding cutting-edge AI systems, but it’s just as important to spend time developing the legal and regulatory frameworks that will inform how those systems are implemented in the future. Here are some of the reasons why policy development is a high-value area of research right now:

  1. Experience gained on short-term AI policy issues builds the credibility needed to advise on the long-term AI policy issues that will arise in the future.
  2. Very few people currently working in government, politics or policy communities care about Artificial General Intelligence (AGI) safety.
  3. There are opportunities to influence current AI policy decisions in order to provide fertile ground for future policy decisions or, better but rarer, to shape AGI safety policy directly today through evergreen texts. Future policy is path-dependent on the policy we implement today; what we do now sets precedent.
  4. There are opportunities today to develop a skillset useful for other policy issues and causes.
  5. Few resources are being devoted to this avenue for impact, so the current return on investment is quite good.

While this podcast episode isn’t specifically about AI in healthcare, I’ve nonetheless added it to my public Zotero library of resources on AI in healthcare. The discussion goes into some depth on the importance of being involved in policy development, which matters most in the domain of AGI and the potential of AI as an existential threat, but it’s also valuable to consider how we could be involved in policy development around AI in health systems.


Health professionals’ role in the banning of lethal autonomous weapons

This is a great episode from the Future of Life Institute, on the topic of banning lethal autonomous weapons. You may wonder, what on earth do lethal autonomous weapons have to do with health professionals? I wondered the same thing until I was reminded of the role that physios play in the rehabilitation of landmine victims. Landmines are less sophisticated than the next generation of lethal autonomous weapons, which means, in part, that they’re less able to distinguish between targets.

Weaponised drones, for example, will not only identify and engage targets based on age, gender, location, dress code, etc. but will also be able to reprioritise objectives independent of any human operator. In addition, unlike building a landmine, which (probably) requires some specialised training, weaponised drones will be produced en masse at low cost, fitted with commoditised hardware, will be programmable, and can be deployed at distance from the target. These are tools of mass destruction for the consumer market, enabling a few to create immense harm to many.

The video below gives an example of how hundreds of drones can be coordinated by a single person. If those drones were fitted with explosives instead of flashing lights, you start to get a sense of how much damage they could do in a crowded space, and how difficult they would be to stop.

Given our commitment to do no harm, the global health community has a long history of successful advocacy against inhumane weapons, and the World and American Medical Associations have called for bans on nuclear, chemical and biological weapons. Now, recent advances in artificial intelligence have brought us to the brink of a new arms race in lethal autonomous weapons.

The American Medical Association has published a position statement on the role of artificial intelligence in augmenting the work of medical professionals, but no professional organisation has yet taken a stance on banning autonomous weapons. It seems odd that we recognise the significance of AI for enhancing healthcare but not, apparently, its potential for increasing human suffering. The medical and health professional community should not only advocate for the use of AI to improve health but also work to ensure it is not used for autonomous decision-making in armed conflict.



Algorithms have become so powerful we need a robust, Europe-wide response

Opaque algorithms in effect challenge the checks and balances essential for liberal democracies and market economies to function. As the EU builds a digital single market, it needs to ensure that market is anchored in democratic principles. Yet the software codes that determine which link shows up first, second, third and onwards, remain protected by intellectual property rights as “trade secrets”.

Source: Algorithms have become so powerful we need a robust, Europe-wide response

I thought that there were two interesting takeaways from this article. The first is the explicit concern around AI-based systems that are driven by commercial interests in the form of privately funded startups and massive multinational corporations. This is especially important when we consider that a significant proportion of AI research is aimed at improving algorithms that are used in the service of social media services that are, in fact, advertising platforms. As algorithms increasingly determine what we see in our newsfeeds, it becomes more important for everyone to understand that the primary objective of corporations is to increase shareholder profit and return on investment.

The second point is a more subtle question around whether we need AI systems that are informed by European values. Exactly what these values are can be debated but President Macron of France has described what he sees as a French response to North American and Chinese hegemony in this domain:

“And Europe has not exactly the same collective preferences as US or China. If we want to defend our way to deal with privacy, our collective preference for individual freedom versus technological progress, integrity of human beings and human DNA, if you want to manage your own choice of society, your choice of civilization, you have to be able to be an acting part of this AI revolution.”

Of course, this raises the question of what other values could be embedded in AI-based systems: African values? Human values? Patients’ values? I think it comes down to asking whose interests are being served by the algorithm, and then ensuring that we have enough diversity among those responsible for the design and implementation of AI in different contexts.

AI clinical

AMA Passes First Policy Recommendations on Augmented Intelligence

Combining AI methods and systems with an irreplaceable human clinician can advance the delivery of care in a way that outperforms what either can do alone. But we must forthrightly address challenges in the design, evaluation and implementation as this technology is increasingly integrated into physicians’ delivery of care to patients.

Source: AMA Passes First Policy Recommendations on Augmented Intelligence

The American Medical Association recently released its policy recommendations on the use of augmented intelligence systems in the clinical context. Briefly, the AMA states that it will:

  1. Help set priorities for health care AI.
  2. Identify opportunities to integrate the perspectives of clinicians into the development of health care AI.
  3. Promote the development of thoughtfully designed, high-quality, clinically validated health care AI.
  4. Encourage the education of all stakeholders about the promise and limitations of health care AI.
  5. Explore the legal implications for health care AI.

To me, this looks like a set of objectives or lines of inquiry for anyone interested in a research programme looking at the use of AI in the context of healthcare and health professions education.

twitter feed

Twitter Weekly Updates for 2011-08-01

  • Cities Are Immortal; Companies Die – Masie briefly mentioned this Kelly article (I think) in his great presentation at #cityafrica (link updated after the fact)
  • Historic medical manuscripts go online
  • Omniscient Mobile Computing: What if Your Apps Knew Everything About Where You Are? Reminded of Masie at #cityafrica
  • Is RT a form of legitimate peripheral participation? Attended #tedxstellenbosch yesterday & did a lot of RT, wondering “did I participate”?
  • @Sharoncolback not sure if it’s so simple, see @jeffjarvis who is very public re. personal stuff & who inspires many in similar situations
  • Am I addicted to the internet? Maybe, but so what?
  • Before iPhone war, Samsung sells 5M GS2s in 85 days – Got my Samsung Galaxy S2 last week and loving it so far
  • Are there some things that shouldn’t be tweeted about?
  • Feds Will Pay Doctors For Using Medical Records iPad App
  • Electronic medical records get a boost from iPad, federal funding
  • The current impact agenda could consider the impact of inspirational teaching, not just research
  • Mendeley 1.0 is here!
  • Learning spaces haven’t changed much since structured education emerged centuries ago. #cityafrica providing inspiration for change
  • @wesleylynch venue is packed, hard to find 5 seats next to each other, realm team always inviting 🙂
  • @wesleylynch re-designing cities to be integrated spaces for working, learning and living
  • @wesleylynch not sitting with #realm team, but chatted a bit
  • @hotdogcop “quality teaching” isn’t going to happen without policy changes that affect salaries and other factors related to job satisfaction
  • @hotdogcop interest groups aren’t confined to academia though…some academics seek radical change, institutional structure makes it hard
  • @hotdogcop “academic” doesn’t have to mean “top-down” or “policy maker”
  • @hotdogcop agreed, but we train the people who will be called on to implement change
  • @hotdogcop Mokena has some great ideas re. the city & education. Would be interesting to have him talk to our academics
  • RT @TEDxStellenbsch: The future city already exists <- no, the technology exists, it’ll take a few years to implement #cityafrica
  • Mokena Makena the best speaker so far at #tedxstellenbosch #CityAfrica
  • Classrooms are not inspiring #cityafrica
  • How could learning spaces change if city / community / nature were more fully integrated? #cityafrica
  • How would the world look if cities were planned to integrate nature? #cityafrica
  • Cities and nature don’t have to be mutually exclusive #cityafrica
  • @vivboz hi Vivienne, I’m not sure what writing group you mean?
  • If the world can’t see or hear you, are u relevant? Do gangs and violence allow young people to be feared, if not seen & heard? #cityafrica
  • How do our living and working spaces change the way we think and what does that mean for how we live? #cityafrica
  • At #tedxstellenbosch trying to better understand the relationship between city and community #cityafrica
  • Using social media: practical and ethical guidance for doctors and medical students – The British Medical Association
  • Sites for the QR-enabled Tourist
twitter feed

Twitter Weekly Updates for 2011-06-06