The fate of medicine in the time of AI

Source: Coiera, E. (2018). The fate of medicine in the time of AI.

The challenges of real-world implementation alone mean that we probably will see little change to clinical practice from AI in the next 5 years. We should certainly see changes in 10 years, and there is a real prospect of massive change in 20 years. [1]

This means that students entering health professions education today are likely to begin seeing the impact of AI in clinical practice when they graduate, and are very likely to see significant changes 3-5 years into their practice. Regardless of what progress is made between now and then, the students we’re teaching today will certainly be practising in a clinical environment that is very different from the one we prepared them for.

Coiera offers the following suggestions for how clinical education should probably be adapted:

  • Include a solid foundation in the statistical and psychological science of clinical reasoning.
  • Develop models of shared decision-making that include patients’ intelligent agents as partners in the process.
  • Prepare clinicians for a greater role in patient safety as new risks emerge (e.g. automation bias).
  • Ensure that clinicians are active participants in developing the new models of care that AI will make possible.

We should also recognise that much is still unknown about where, when and how these disruptions will occur. Coiera suggests that our best guesses about the changes that are likely to happen should focus on the routine aspects of practice, because this is where AI research will concentrate. As educators, we should work with clinicians to identify the areas of clinical practice that are most likely to be disrupted by AI-based technologies, and then determine how education needs to change in response.

The prospect of AI is a Rorschach blot upon which many transfer their technological dreams or anxieties.

Finally, it’s also useful to consider that we will see in AI our own hopes and fears, and that these biases are likely to inform the way we think about its potential benefits and dangers. For this reason, we should include as diverse a group as possible in the discussion of how this technology should be integrated into practice.


[1] The quote from the article is based on Amara’s Law: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”

How OpenAI is developing real solutions to the AI alignment problem

[Figure: growth in AI safety spending. Source: Farquhar, S. (2017). Changes in funding in the AI safety field.]

Here’s a situation we all regularly confront: you want to answer a difficult question, but aren’t quite smart or informed enough to figure it out for yourself. The good news is you have access to experts who are smart enough to figure it out. The bad news is that they disagree.

If given plenty of time – and enough arguments, counterarguments and counter-counter-arguments between all the experts – should you eventually be able to figure out which is correct? What if one expert were deliberately trying to mislead you? And should the expert with the correct view just tell the whole truth, or will competition force them to throw in persuasive lies in order to have a chance of winning you over?

In other words: does ‘debate’, in principle, lead to truth?

Source: Wiblin, R. & Harris, K. (2018). Dr Paul Christiano on how OpenAI is developing real solutions to the ‘AI alignment problem’, and his vision of how humanity will progressively hand over decision-making to AI systems.

This is one of the most thoughtful conversations I’ve heard on the alignment problem in AI safety. It wasn’t always easy to follow, as both participants are operating at a very high level of understanding of the topic, but it’s really rewarding and definitely something I’ll listen to again. Topics they cover include:

  • Why Paul expects AI to transform the world gradually rather than explosively and what that would look like.
  • Several concrete methods OpenAI is trying to develop to ensure AI systems do what we want even if they become more competent than us.
  • Why AI systems will probably be granted legal and property rights.
  • How an advanced AI that doesn’t share human goals could still have moral value.
  • Why machine learning might take over science research from humans before it can do most other tasks.
  • Which decade we should expect human labour to become obsolete, and how this should affect your savings plan.
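
The ‘debate’ idea in the opening quote has a simple intuition behind it: a weak judge can referee a dispute between strong experts by forcing them to narrow their disagreement down to a single claim the judge can check directly. Here’s a toy sketch of that intuition in Python. It’s my own illustration rather than OpenAI’s method, and the set-up is deliberately artificial: the question is the sum of a long list, and the judge can only verify one element.

```python
# Toy illustration of debate (a sketch, not OpenAI's implementation).
# Question: "what is the sum of this list?" The judge cannot add the
# whole list but can check any single element. The debaters repeatedly
# bisect the disputed range at the half where their claimed partial
# sums diverge, until the dispute is about one element the judge can
# verify directly.

def debate_winner(xs, liar_xs):
    """xs: the true elements; liar_xs: the liar's false version of them.
    Returns which debater the judge ends up believing."""
    if sum(xs) == sum(liar_xs):
        return "no dispute"
    lo, hi = 0, len(xs)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        # Each debater states the sum of the left half of the disputed range.
        if sum(xs[lo:mid]) != sum(liar_xs[lo:mid]):
            hi = mid  # they disagree about the left half, so zoom in there
        else:
            lo = mid  # they agree on the left, so the dispute is on the right
    # Base case: the judge checks the one remaining element directly
    # and believes whoever reported it truthfully.
    return "honest" if liar_xs[lo] != xs[lo] else "liar"

# A lie about the total forces a lie about some individual element, so
# the honest debater always wins this game. That is the hopeful
# intuition behind debate as an alignment technique.
true_answers = list(range(1000))
lies = true_answers.copy()
lies[417] += 3  # the liar misreports one element to shift the total
print(debate_winner(true_answers, lies))  # -> "honest"
```

Of course, the open research question is whether this dynamic survives in messy natural-language arguments, where a dishonest debater has far more room to mislead than in arithmetic.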

Medical data: who owns it and what can be done to it?

…most states in the US do not have law to confer specific ownership of medical data to patients, while others put the rights on hospitals and physicians. Of all, only New Hampshire allows patients to legally own their medical records.

Source: Medical data: who owns it and what can be done to it?

A short article that raises some interesting questions. My understanding is that the data belongs to the patient, while the medium on which the data is stored belongs to the hospital. For example, I own the data generated about my body, but the paper folder or computer hard drive belongs to the hospital. That means I can ask the hospital to photocopy my medical folder and give me the copy (or to email me an exported XML data file from whatever EHR system they use), but I can’t take the folder home when I’m discharged.

Things are going to get interesting when AI-based systems are trained en masse on historical medical records where patients did not give consent for their data to be used for algorithmic training. I believe that the GDPR goes some way towards addressing this issue by stating that “healthcare providers do not have to seek prior permission from patients to use their data, as long as they observe the professional secrecy act to not identify patients at the individual level”.

Rodney Brooks | Robotics & AI – Their Present & Future

Rodney Brooks was one of the leading developers of AI throughout the ’80s and early ’90s at MIT, where he spent a decade running one of the two largest and most prominent AI centres in the world. There are few who can match the breadth, depth and duration of Rodney’s perspective on the tech industry, and this makes for a fascinating conversation.

In this podcast, Brooks diverges from fashionable narratives on the risk of superintelligent AI; the extent to which jobs will be imperilled by automation (he’s more worried about a labour shortage than a job shortage); and the timeline of the rise of self-driving cars (this being the intersection of his two domains of foundational expertise: robotics and AI).


Graduates are taking £9k courses to help beat AI interviews for City jobs

Via a webcam, the software remotely asks preliminary-round candidates 20 minutes of questions and brain-teasers, and records eye movements, breathing patterns and any nervous tics. Popular software such as HireVue also scans for emotion and expressions, such as blinks, smiles and frowns, by monitoring the face through the applicant’s front-facing smartphone camera or computer webcam.

Source: Blunden, M. (2018). Graduates are taking £9k courses to help beat AI interviews for City jobs.

Well, that’s just terrifying.

In Beta and sunsetting consumer Google+

Action 1: We are shutting down Google+ for consumers.

This review crystallized what we’ve known for a while: that while our engineering teams have put a lot of effort and dedication into building Google+ over the years, it has not achieved broad consumer or developer adoption, and has seen limited user interaction with apps. The consumer version of Google+ currently has low usage and engagement: 90 percent of Google+ user sessions are less than five seconds.

I don’t think it’s a surprise to anyone that Google+ wasn’t a big hit, although I am surprised that they’ve taken the step to shut it down for consumers. And this is the problem with online communities in general: when the decision is made that they’re not cost-effective, they’re shut down regardless of the value they create for community members.

When Ben and I started In Beta last year, we decided to use Google+ for our community announcements, and we’ve been pretty happy with what we’ve been able to achieve with it. The community has grown to almost 100 members and, while we don’t see much engagement or interaction, that’s not why we started using it. For us, it was a way to make announcements about planning for upcoming episodes and, since we didn’t have a dedicated online space, it made sense to use something that already existed. Now that Google+ is being sunsetted, we’ll need to figure out another place to set up the community.

Mozilla’s Common Voice project

High-quality speech-to-text engines require thousands of hours of voice data to train them, but publicly available voice data is very limited and the cost of commercial datasets is exorbitant. This prompted the question: how might we collect large quantities of voice data for Open Source machine learning?

Source: Branson, M. (2018). We’re intentionally designing open experiences, here’s why.

One of the big problems with the development of AI is that few organisations have the large, inclusive, diverse datasets necessary to reduce the inherent bias in algorithmic training. Mozilla’s Common Voice project is an attempt to create a large, multi-language dataset of human voices with which to train speech-recognition AI.

This is why we built Common Voice. To tell the story of voice data and how it relates to the need for diversity and inclusivity in speech technology. To better enable this storytelling, we created a robot that users on our website would “teach” to understand human speech by speaking to it through reading sentences.

I think that voice and audio are probably going to be the next computer-user interface, so this is an important project to support if we want to make sure that Google, Facebook, Baidu and Tencent don’t have a monopoly on natural language processing. I see this project existing on the same continuum as OpenAI, which aims to ensure that “…AGI’s benefits are as widely and evenly distributed as possible.” Whatever you think about the possibility of AGI arriving anytime soon, I think it’s a good thing that people are working to ensure that the benefits of AI aren’t mediated by a few gatekeepers whose primary function is to increase shareholder value.

Most of the data used by large companies isn’t available to the majority of people. We think that stifles innovation. So we’ve launched Common Voice, a project to help make voice recognition open and accessible to everyone. Now you can donate your voice to help us build an open-source voice database that anyone can use to make innovative apps for devices and the web. Read a sentence to help machines learn how real people speak. Check the work of other contributors to improve the quality. It’s that simple!

The datasets are openly licensed and available for anyone to download and use, alongside other open language datasets that Mozilla links to on the page. This is an important project that everyone should consider contributing to. The interface is intuitive and makes it very easy to either submit your own voice or to validate the recordings that other people have made. Why not give it a go?
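
If you’d rather use the data than donate to it, each release is an archive of audio clips plus an index file pairing every clip with the sentence that was read. Here’s a minimal sketch of loading that index with pandas; the paths and column names (cv-valid-train.csv, filename, text) are assumptions based on the first public release, so check them against your download.

```python
# Minimal sketch: load the transcript index from a downloaded Common
# Voice release and pair each audio clip with its sentence, ready for
# training a speech-to-text model. The paths and column names below
# are assumptions based on the first public release; adjust them to
# match your download.
import pandas as pd

index = pd.read_csv("cv_corpus_v1/cv-valid-train.csv")
print(len(index), "clips in the training split")

# Each row points at an audio file and gives the sentence that was read.
pairs = [("cv_corpus_v1/" + row.filename, row.text)
         for row in index.itertuples()]
print(pairs[0])
```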

adapting to constant change

The human work of tomorrow will not be based on competencies best-suited for machines, because creative work that is continuously changing cannot be replicated by machines or code. While machine learning may be powerful, connected human learning is novel, innovative, and inspired.

Source: Jarche, H. (2018). adapting to constant change.

A good post on why learning how to learn is the only reasonable way to think about the future of work (and professional education). The upshot is that Communities of Practice are central to helping us adapt to working environments that are constantly changing, and that will most likely continue to change.

However, I probably wouldn’t take the approach that it’s “us vs machines” because I don’t think that’s where we’re going to end up. I think it’s more likely that those who work closely with AI-based systems will outperform and replace those who don’t. In other words, we’re not competing with machines for our jobs; we’re competing with other people who use machines more effectively than we do.

Trying to be better than machines is not only difficult but our capitalist economy makes it pretty near impossible.

This is both true and a bit odd. No-one thinks they need to be able to do complex mathematics without calculators, and those who are better at using calculators can do more complex mathematics. Why is it such a big leap to realise that we don’t have to be better image classifiers than machines either? Let’s accept that diagnosis from CT will be performed by AI and focus on how that frees up physician time for other human- and patient-centred tasks. What will medical education look like when we’re teaching students that adapting while working with machines is the only way to stay relevant? I think that clinicians who graduate from medical schools who take this approach are more likely to be employed in the future.