The challenges of real-world implementation alone mean that we will probably see little change to clinical practice from AI in the next 5 years. We should certainly see changes in 10 years, and there is a real prospect of massive change in 20 years.
This means that students entering health professions education today are likely to begin seeing the impact of AI in clinical practice when they graduate, and are very likely to see significant changes 3-5 years into their careers. Regardless of what progress is made between now and then, the students we’re teaching today will certainly be practising in a clinical environment that is very different from the one we prepared them for.
Coiera offers the following suggestions for how clinical education should probably be adapted:
Include a solid foundation in the statistical and psychological science of clinical reasoning.
Develop models of shared decision-making that include patients’ intelligent agents as partners in the process.
Clinicians will have a greater role to play in patient safety as new risks emerge, e.g. automation bias.
Clinicians must be active participants in the development of new models of care that will become possible with AI.
We should also recognise that there is still a lot that is unknown with respect to where, when and how these disruptions will occur. Coiera suggests that the best guesses we can make about predicting the changes that are likely to happen should probably focus on those aspects of practice that are routine because this is where AI research will focus. As educators, we should work with clinicians to identify those areas of clinical practice that are most likely to be disrupted by AI-based technologies and then determine how education needs to change in response.
The prospect of AI is a Rorschach blot upon which many transfer their technological dreams or anxieties.
Finally, it’s also useful to consider that we will see in AI our own hopes and fears and that these biases are likely to inform the way we think about the potential benefits and dangers of AI. For this reason, we should include as diverse a group as possible in the discussion of how this technology should be integrated into practice.
The quote from the article is based on Amara’s Law: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”
This overview of the changes in capabilities of the Atlas humanoid robot from Boston Dynamics is both fascinating and a bit unsettling. In 5 years Atlas has gone from struggling to stand on one leg, to walking on uneven surfaces, to running on uneven surfaces, to doing backflips and now, in October 2018, to bounding up a staggered series of wooden platforms. It’s worth noting that very few human beings would be able to accomplish this last feat.
According to Boston Dynamics, Atlas’ software uses all parts of the body to generate the necessary force to propel the robot up the platforms. The most impressive part of the last demo is the fact that “...Atlas uses computer vision and visible markers on the platforms to decide when and how to shift its weight. So, it’s not just executing a program, it’s making it up as it goes along.” In other words, Atlas is making real-time decisions about how to move, based on what it sees in front of it. No-one has told it what to do.
The profound implication of this is that these things are only ever going to get better, and the rate of change is going to increase. Now that they’ve solved “balance”, “walking”, “running”, and “jumping”, what will Boston Dynamics turn to next? Once Atlas has achieved parity with human performance it’s only a matter of time before it’s superhuman in every physical ability we care about.
A nice collection of quotes in a slideshow, taken from a new report by the National Academies of Sciences, Engineering, and Medicine, that highlights the dynamic process of learning throughout the lifespan.
Here’s a situation we all regularly confront: you want to answer a difficult question, but aren’t quite smart or informed enough to figure it out for yourself. The good news is you have access to experts who are smart enough to figure it out. The bad news is that they disagree.
Given plenty of time – and enough arguments, counterarguments and counter-counter-arguments between all the experts – should you eventually be able to figure out which is correct? What if one expert were deliberately trying to mislead you? And should the expert with the correct view just tell the whole truth, or will competition force them to throw in persuasive lies in order to have a chance of winning you over?
In other words: does ‘debate’, in principle, lead to truth?
This is one of the most thoughtful conversations I’ve heard on the alignment problem in AI safety. It wasn’t always easy to follow as both participants are operating at a very high level of understanding on the topic, but it’s really rewarding. It’s definitely something I’ll listen to again. Topics that they covered include:
Why Paul expects AI to transform the world gradually rather than explosively and what that would look like.
Several concrete methods OpenAI is trying to develop to ensure AI systems do what we want even if they become more competent than us.
Why AI systems will probably be granted legal and property rights.
How an advanced AI that doesn’t share human goals could still have moral value.
Why machine learning might take over science research from humans before it can do most other tasks.
Which decade we should expect human labour to become obsolete, and how this should affect your savings plan.
…most states in the US do not have law to confer specific ownership of medical data to patients, while others put the rights on hospitals and physicians. Of all, only New Hampshire allows patients to legally own their medical records.
A short article that raises some interesting questions. My understanding is that the data belongs to the patient and the media on which the data is stored belongs to the hospital. For example, I own the data generated about my body but the paper folder or computer hard drive belongs to the hospital. That means I can ask the hospital to photocopy my medical folder and give me the copy (or to email me an exported XML data file from whatever EHR system they use) but I can’t take the folder home when I’m discharged.
Things are going to get interesting when AI-based systems are being trained en masse using historical medical records where patients did not give consent for their data to be used for algorithmic training. I believe that the GDPR goes some way towards addressing this issue by stating that, “healthcare providers do not have to seek prior permission from patients to use their data, as long as they observe the professional secrecy act to not identify patients at the individual level”.
Rodney Brooks was one of the leading developers of AI throughout the 80s and early 90s at MIT, where he spent a decade running one of the two largest and most prominent AI centres in the world. There are few who can match the breadth, depth, and duration of Rodney’s purview on the tech industry, and this makes for a fascinating conversation.
In this podcast, Brooks diverges from fashionable narratives on the risks of superintelligent AI; the extent to which jobs will be imperiled by automation (he’s more worried about a labor shortage than a job shortage); and the timeline of the rise of self-driving cars (this being the intersection of his two domains of foundational expertise: robotics and AI).
Via a webcam, the software remotely asks preliminary-round candidates 20 minutes of questions and brain-teasers, and records eye movements, breathing patterns and any nervous tics. Popular software such as HireVue also scans for emotion and expressions, such as blinks, smiles and frowns, by monitoring the face through the applicant’s front-facing smartphone camera or computer webcam.
Action 1: We are shutting down Google+ for consumers.
This review crystallized what we’ve known for a while: that while our engineering teams have put a lot of effort and dedication into building Google+ over the years, it has not achieved broad consumer or developer adoption, and has seen limited user interaction with apps. The consumer version of Google+ currently has low usage and engagement: 90 percent of Google+ user sessions are less than five seconds.
I don’t think it’s a surprise to anyone that Google+ wasn’t a big hit, although I am surprised that they’ve taken the step to shut it down for consumers. And this is the problem with online communities in general: when the decision is made that they’re not cost-effective, they’re shut down regardless of the value they create for community members.
When Ben and I started In Beta last year we decided to use Google+ for our community announcements and have been pretty happy with what we’ve been able to achieve with it. The community has grown to almost 100 members and, while we don’t see much engagement or interaction, that’s not why we started using it. For us, it was to make announcements about planning for upcoming episodes and since we didn’t have a dedicated online space, it made sense to use something that already existed. Now that Google+ is being sunsetted we’ll need to figure out another place to set up the community.
Any high-quality speech-to-text engine requires thousands of hours of voice data to train it, but publicly available voice data is very limited and the cost of commercial datasets is exorbitant. This prompted the question, how might we collect large quantities of voice data for Open Source machine learning?
One of the big problems with the development of AI is that few organisations have the large, inclusive, diverse datasets that are necessary to reduce the inherent bias in algorithmic training. Mozilla’s Common Voice project is an attempt to create a large, multilanguage dataset of human voices with which to train natural language AI.
This is why we built Common Voice. To tell the story of voice data and how it relates to the need for diversity and inclusivity in speech technology. To better enable this storytelling, we created a robot that users on our website would “teach” to understand human speech by speaking to it through reading sentences.
I think that voice and audio are probably going to be the next computer-user interface, so this is an important project to support if we want to make sure that Google, Facebook, Baidu and Tencent don’t have a monopoly on natural language processing. I see this project existing on the same continuum as OpenAI, which aims to ensure that “…AGI’s benefits are as widely and evenly distributed as possible.” Whatever you think about the possibility of AGI arriving anytime soon, I think it’s a good thing that people are working to ensure that the benefits of AI aren’t mediated by a few gatekeepers whose primary function is to increase shareholder value.
Most of the data used by large companies isn’t available to the majority of people. We think that stifles innovation. So we’ve launched Common Voice, a project to help make voice recognition open and accessible to everyone. Now you can donate your voice to help us build an open-source voice database that anyone can use to make innovative apps for devices and the web. Read a sentence to help machines learn how real people speak. Check the work of other contributors to improve the quality. It’s that simple!
The datasets are openly licensed and available for anyone to download and use, alongside other open language datasets that Mozilla links to on the page. This is an important project that everyone should consider contributing to. The interface is intuitive and makes it very easy to either submit your own voice or to validate the recordings that other people have made. Why not give it a go?
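To give a concrete sense of what the downloadable data looks like, here is a minimal sketch that filters a Common Voice-style metadata file for validated clips. The tab-separated layout and the column names (`client_id`, `path`, `sentence`, `up_votes`, `down_votes`) are assumptions based on recent releases; the exact schema varies between dataset versions, so check the files you actually download.

```python
import csv
import io

# A tiny synthetic sample in the TSV layout used by Common Voice releases.
# Column names are assumptions; verify against the version you download.
sample_tsv = (
    "client_id\tpath\tsentence\tup_votes\tdown_votes\n"
    "abc123\tclip_0001.mp3\tThe quick brown fox.\t3\t0\n"
    "def456\tclip_0002.mp3\tOpen voice data matters.\t2\t1\n"
)

def load_validated_clips(tsv_text, min_net_votes=2):
    """Keep clips whose up-votes exceed down-votes by at least min_net_votes."""
    reader = csv.DictReader(io.StringIO(tsv_text), delimiter="\t")
    clips = []
    for row in reader:
        net = int(row["up_votes"]) - int(row["down_votes"])
        if net >= min_net_votes:
            clips.append({"path": row["path"], "sentence": row["sentence"]})
    return clips

# Only the first clip clears the (hypothetical) validation threshold.
clips = load_validated_clips(sample_tsv)
print(clips)
```

In practice you would point this at the `validated.tsv` file in a downloaded release rather than an inline string; the community-validation step it mimics is exactly the “check the work of other contributors” process Mozilla describes.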
The human work of tomorrow will not be based on competencies best-suited for machines, because creative work that is continuously changing cannot be replicated by machines or code. While machine learning may be powerful, connected human learning is novel, innovative, and inspired.
A good post on why learning how to learn is the only reasonable way to think about the future of work (and professional education). The upshot is that Communities of Practice can help us adapt to working environments that are constantly changing, as will most likely continue to be the case.
However, I probably wouldn’t take the approach that it’s “us vs machines” because I don’t think that’s where we’re going to end up. I think it’s more likely that those who work closely with AI-based systems will outperform and replace those who don’t. In other words, we’re not competing with machines for our jobs; we’re competing with other people who use machines more effectively than we do.
Trying to be better than machines is not only difficult but our capitalist economy makes it pretty near impossible.
This is both true and a bit odd. No-one thinks they need to be able to do complex mathematics without calculators, and those who are better at using calculators can do more complex mathematics. Why is it such a big leap to realise that we don’t have to be better image classifiers than machines either? Let’s accept that diagnosis from CT will be performed by AI and focus on how that frees up physician time for other human- and patient-centred tasks. What will medical education look like when we’re teaching students that adapting while working with machines is the only way to stay relevant? I think that clinicians who graduate from medical schools that take this approach are more likely to be employed in the future.