Health professionals’ role in the banning of lethal autonomous weapons

This is a great episode from the Future of Life Institute podcast on the topic of banning lethal autonomous weapons. You may wonder, what on earth do lethal autonomous weapons have to do with health professionals? I wondered the same thing until I was reminded of the role that physios play in the rehabilitation of landmine victims. Landmines are less sophisticated than the next generation of lethal autonomous weapons, which means, in part, that they’re less able to distinguish between targets.

Weaponised drones, for example, will not only identify and engage targets based on age, gender, location, dress code, etc. but will also be able to reprioritise objectives independent of any human operator. In addition, unlike landmines, which (probably) require some specialised training to build, weaponised drones can be produced en masse at low cost, fitted with commoditised hardware, programmed, and deployed at a distance from the target. These are tools of mass destruction for the consumer market, enabling a few to inflict immense harm on many.

The video below gives an example of how hundreds of drones can be coordinated by a single person. If these drones were fitted with explosives instead of flashing lights, you start to get a sense of how much damage they could do in a crowded space and how difficult it would be to stop them.

Given our commitment to do no harm, the global health community has a long history of successful advocacy against inhumane weapons, and the World and American Medical Associations have called for bans on nuclear, chemical and biological weapons. Now, recent advances in artificial intelligence have brought us to the brink of a new arms race in lethal autonomous weapons.

The American Medical Association has published a position statement on the role of artificial intelligence in augmenting the work of medical professionals, but no professional organisation has yet taken a stance on banning autonomous weapons. It seems odd that we recognise the significance of AI for enhancing healthcare but not, apparently, its potential for increasing human suffering. The medical and health professional community should not only advocate for the use of AI to improve health but also work to ensure that it is not used for autonomous decision-making in armed conflict.

More reading and resources at https://futureoflife.org/2019/04/02/fli-podcast-why-ban-lethal-autonomous-weapons/.

Algorithms have become so powerful we need a robust, Europe-wide response

Opaque algorithms in effect challenge the checks and balances essential for liberal democracies and market economies to function. As the EU builds a digital single market, it needs to ensure that market is anchored in democratic principles. Yet the software codes that determine which link shows up first, second, third and onwards, remain protected by intellectual property rights as “trade secrets”.

Source: Algorithms have become so powerful we need a robust, Europe-wide response

I thought that there were two interesting takeaways from this article. The first is the explicit concern around AI-based systems that are driven by commercial interests in the form of privately funded startups and massive multinational corporations. This is especially important when we consider that a significant proportion of AI research is aimed at improving algorithms used in the service of social media platforms that are, in fact, advertising businesses. As algorithms increasingly determine what we see in our newsfeeds, it becomes more important for everyone to understand that the primary objective of corporations is to increase shareholder profit and return on investment.

The second point is a more subtle question around whether we need AI systems that are informed by European values. Exactly what these values are can be debated, but President Macron of France has described what he sees as a French response to North American and Chinese hegemony in this domain:

“And Europe has not exactly the same collective preferences as US or China. If we want to defend our way to deal with privacy, our collective preference for individual freedom versus technological progress, integrity of human beings and human DNA, if you want to manage your own choice of society, your choice of civilization, you have to be able to be an acting part of this AI revolution.”

Of course, this raises the question of what other values should be embedded in AI-based systems: African values? Human values? Patients’ values? I think it comes down to asking whose interests are being served by the algorithm, and then ensuring that we have enough diversity among those responsible for the design and implementation of AI in different contexts.

AMA Passes First Policy Recommendations on Augmented Intelligence

Combining AI methods and systems with an irreplaceable human clinician can advance the delivery of care in a way that outperforms what either can do alone. But we must forthrightly address challenges in the design, evaluation and implementation as this technology is increasingly integrated into physicians’ delivery of care to patients.

Source: AMA Passes First Policy Recommendations on Augmented Intelligence

The American Medical Association recently released their policy recommendations on the use of augmented intelligence systems in the clinical context. Briefly, the AMA states that it will:

  1. Help set priorities for health care AI.
  2. Identify opportunities to integrate the perspectives of clinicians into the development of health care AI.
  3. Promote the development of thoughtfully-designed, high quality, clinically validated health care AI.
  4. Encourage the education of all stakeholders about the promise and limitations of health care AI.
  5. Explore the legal implications for health care AI.

To me, this looks like a set of objectives or lines of inquiry for anyone interested in a research programme looking at the use of AI in the context of healthcare and health professions education.

Twitter Weekly Updates for 2011-08-01

  • Cities Are Immortal; Companies Die http://bit.ly/pEWOmx. Masie briefly mentioned this Kelly article (I think) in his great presentation at #cityafrica # (link updated after the fact)
  • Historic medical manuscripts go online http://ow.ly/1v9v0b #
  • Omniscient Mobile Computing: What if Your Apps Knew Everything About Where You Are? http://ow.ly/1v9tkE. Reminded of Masie at #cityafrica #
  • Is RT a form of legitimate peripheral participation? Attended #tedxstellenbosch yesterday & did a lot of RT, wondering “did I participate”? #
  • @Sharoncolback not sure if it’s so simple, see @jeffjarvis who is very public re. personal stuff & who inspires many in similar situations #
  • Am I addicted to the internet? Maybe, but so what? http://ow.ly/1v8Cd0 #
  • Before iPhone war, Samsung sells 5M GS2s in 85 days http://ow.ly/1v8BPZ. Got my samsung galaxy S2 last week and loving it so far #
  • Are there some things that shouldn’t be tweeted about? http://ow.ly/1v8BDT #
  • Feds Will Pay Doctors For Using Medical Records iPad App http://ow.ly/1v8AYl #
  • Electronic medical records get a boost from iPad, federal funding http://ow.ly/1v8AH5 #
  • The current impact agenda could consider the impact of inspirational teaching, not just research http://ow.ly/1v8An8 #
  • Mendeley 1.0 is here! http://ow.ly/1v8y0T #
  • Learning spaces haven’t changed much since structured education emerged centuries ago. #cityafrica providing inspiration for change #
  • @wesleylynch venue is packed, hard to find 5 seats next to each other, realm team always inviting 🙂 #
  • @wesleylynch re-designing cities to be integrated spaces for working, learning and living #
  • @wesleylynch not sitting with #realm team, but chatted a bit #
  • @hotdogcop “quality teaching” isn’t going to happen without policy change that affect salaries and other factors related to job satisfaction #
  • @hotdogcop interest groups aren’t confined to academia though…some academics seek radical change, institutional structure makes it hard #
  • @hotdogcop “academic” doesn’t have to mean “top-down” or “policy maker” #
  • @hotdogcop agreed, but we train the people who will be called on to implement change #
  • @hotdogcop Mokena has some great ideas re. the city & education. would be interesting have him talk to our academics #
  • RT @TEDxStellenbsch: The future city already exists <- no, the technology exists, it’ll take a few years to implement #cityafrica #
  • Mokena Makena the best speaker so far at #tedxstellenbosch #CityAfrica #
  • Classrooms are not inspiring #cityafrica #
  • How could learning spaces change if city / community / nature were more fully integrated? #cityafrica #
  • How would the world look if cities were planned to integrate nature? #cityafrica #
  • Cities and nature don’t have to be mutually exclusive #cityafrica #
  • @vivboz hi vivienne, I’m not sure what writing group you mean? #
  • If the world can’t see or hear you, are u relevant? Do gangs and violence allow young people to be feared, if not seen & heard? #cityafrica #
  • How do our living and working spaces change the way we think and what does that mean for how we live? #cityafrica #
  • At #tedxstellenbosch trying to better understand the relationship between city and community #cityafrica #
  • Using social media: practical and ethical guidance for doctors and medical students – The British Medical Association http://bit.ly/nHBIyj #
  • Sites for the QR-enabled Tourist http://bit.ly/qcMuan #

Twitter Weekly Updates for 2011-06-06