Medical data: who owns it and what can be done to it?

…most states in the US do not have law to confer specific ownership of medical data to patients, while others put the rights on hospitals and physicians. Of all, only New Hampshire allows patients to legally own their medical records.

Source: Medical data: who owns it and what can be done to it?

A short article that raises some interesting questions. My understanding is that the data belongs to the patient and the media on which the data is stored belongs to the hospital. For example, I own the data generated about my body but the paper folder or computer hard drive belongs to the hospital. That means I can ask the hospital to photocopy my medical folder and give me the copy (or to email me an exported XML data file from whatever EHR system they use) but I can’t take the folder home when I’m discharged.

Things are going to get interesting when AI-based systems are trained en masse on historical medical records from patients who never consented to their data being used for algorithmic training. I believe that the GDPR goes some way towards addressing this issue by stating that, “healthcare providers do not have to seek prior permission from patients to use their data, as long as they observe the professional secrecy act to not identify patients at the individual level”.

Rodney Brooks | Robotics & AI – Their Present & Future

Rodney Brooks was one of the leading developers of AI coding tools throughout the 80s and early 90s at MIT, where he spent a decade running one of the two largest and most prominent AI centres in the world. There are few who can match the breadth, depth, and duration of Rodney’s purview on the tech industry and this makes for a fascinating conversation.

In this podcast, Brooks diverges from fashionable narratives on the risk of superintelligent AI; the extent to which jobs will be imperiled by automation (he’s more worried about a labor shortage than a job shortage); and the timeline for the rise of self-driving cars (this being the intersection of his two domains of foundational expertise: robotics and AI).

See also

Graduates are taking £9k courses to help beat AI interviews for City jobs

Via a webcam, the software remotely asks preliminary-round candidates 20 minutes of questions and brain-teasers, and records eye movements, breathing patterns and any nervous tics. Popular software such as HireVue also scans for emotion and expressions, such as blinks, smiles and frowns, by monitoring the face through the applicant’s front-facing smartphone camera or computer webcam.

Source: Blunden, M. (2018). Graduates are taking £9k courses to help beat AI interviews for City jobs.

Well, that’s just terrifying.

In Beta and sunsetting consumer Google+

Action 1: We are shutting down Google+ for consumers.

This review crystallized what we’ve known for a while: that while our engineering teams have put a lot of effort and dedication into building Google+ over the years, it has not achieved broad consumer or developer adoption, and has seen limited user interaction with apps. The consumer version of Google+ currently has low usage and engagement: 90 percent of Google+ user sessions are less than five seconds.

I don’t think it’s a surprise to anyone that Google+ wasn’t a big hit although I am surprised that they’ve taken the step to shut it down for consumers. And this is the problem with online communities in general; when the decision is made that they’re not cost-effective, they’re shut down regardless of the value they create for community members.

When Ben and I started In Beta last year we decided to use Google+ for our community announcements and have been pretty happy with what we’ve been able to achieve with it. The community has grown to almost 100 members and, while we don’t see much engagement or interaction, that’s not why we started using it. For us, it was to make announcements about planning for upcoming episodes and since we didn’t have a dedicated online space, it made sense to use something that already existed. Now that Google+ is being sunsetted we’ll need to figure out another place to set up the community.


Mozilla’s Common Voice project

Any high-quality speech-to-text engines require thousands of hours of voice data to train them, but publicly available voice data is very limited and the cost of commercial datasets is exorbitant. This prompted the question, how might we collect large quantities of voice data for Open Source machine learning?

Source: Branson, M. (2018). We’re intentionally designing open experiences, here’s why.

One of the big problems with the development of AI is that few organisations have the large, inclusive, diverse datasets that are necessary to reduce the inherent bias in algorithmic training. Mozilla’s Common Voice project is an attempt to create a large, multilingual dataset of human voices with which to train natural language AI.

This is why we built Common Voice. To tell the story of voice data and how it relates to the need for diversity and inclusivity in speech technology. To better enable this storytelling, we created a robot that users on our website would “teach” to understand human speech by speaking to it through reading sentences.

I think that voice and audio are probably going to be the next computer-user interface, so this is an important project to support if we want to make sure that Google, Facebook, Baidu and Tencent don’t have a monopoly on natural language processing. I see this project existing on the same continuum as OpenAI, which aims to ensure that “…AGI’s benefits are as widely and evenly distributed as possible.” Whatever you think about the possibility of AGI arriving anytime soon, I think it’s a good thing that people are working to ensure that the benefits of AI aren’t mediated by a few gatekeepers whose primary function is to increase shareholder value.

Most of the data used by large companies isn’t available to the majority of people. We think that stifles innovation. So we’ve launched Common Voice, a project to help make voice recognition open and accessible to everyone. Now you can donate your voice to help us build an open-source voice database that anyone can use to make innovative apps for devices and the web. Read a sentence to help machines learn how real people speak. Check the work of other contributors to improve the quality. It’s that simple!

The datasets are openly licensed and available for anyone to download and use, alongside other open language datasets that Mozilla links to on the page. This is an important project that everyone should consider contributing to. The interface is intuitive and makes it very easy to either submit your own voice or to validate the recordings that other people have made. Why not give it a go?
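
For anyone who wants to go a step further and actually work with the data, the language bundles Mozilla distributes are essentially audio clips plus tab-separated metadata, so pairing transcripts with clips takes only a few lines. Here is a minimal sketch in Python; the file and column names (validated.tsv, path, sentence) reflect the bundle layout I’ve seen, so check them against the version you download.

```python
# Minimal sketch: read an extracted Common Voice language bundle into
# (audio clip, transcript) pairs. File/column names are assumptions based on
# the bundles I've seen ("validated.tsv" with "path" and "sentence" columns);
# verify against the release you actually download.

import csv
from pathlib import Path

def load_common_voice(bundle_dir: str):
    """Yield (path_to_clip, transcript) pairs from an extracted bundle."""
    bundle = Path(bundle_dir)
    with open(bundle / "validated.tsv", newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f, delimiter="\t"):
            yield bundle / "clips" / row["path"], row["sentence"]

if __name__ == "__main__":
    # Hypothetical path to an extracted English bundle; adjust to your download.
    for clip, sentence in load_common_voice("cv-corpus-en"):
        print(clip, "->", sentence)
        break
```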

adapting to constant change

The human work of tomorrow will not be based on competencies best-suited for machines, because creative work that is continuously changing cannot be replicated by machines or code. While machine learning may be powerful, connected human learning is novel, innovative, and inspired.

Source: Jarche, H. (2018). adapting to constant change.

A good post on why learning how to learn is the only reasonable way to think about the future of work (and professional education). The upshot is that Communities of Practice are central to helping us adapt to working environments that are constantly changing, and will most likely continue to change.

However, I probably wouldn’t take the approach that it’s “us vs machines” because I don’t think that’s where we’re going to end up. I think it’s more likely that those who work closely with AI-based systems will outperform and replace those who don’t. In other words, we’re not competing with machines for our jobs; we’re competing with other people who use machines more effectively than we do.

Trying to be better than machines is not only difficult but our capitalist economy makes it pretty near impossible.

This is both true and a bit odd. No one thinks they need to be able to do complex mathematics without calculators, and those who are better at using calculators can do more complex mathematics. Why is it such a big leap to realise that we don’t have to be better image classifiers than machines either? Let’s accept that diagnosis from CT images will be performed by AI and focus on how that frees up physician time for other human- and patient-centred tasks. What will medical education look like when we’re teaching students that adapting while working with machines is the only way to stay relevant? I think that clinicians who graduate from medical schools that take this approach are more likely to be employed in the future.

Paper Review: the Babylon Chatbot

…it is fantastic that Babylon has undertaken this evaluation, and has sought to present it in public via this conference paper. They are to be applauded for that. One of the benefits of going public is that we can now provide feedback on the study’s strength and weaknesses.

Source: Coiera, E. (2018). Paper Review: the Babylon Chatbot.

There’s been a lot of coverage of Babylon Health recently, with the associated controversy around what this might mean for GPs and patients. However, what might be even more interesting than the claim that a chatbot could replace a GP is the fact that Babylon is one of the few companies that have published some of their work openly. This is quite unusual in an industry where startups are reluctant to share their methods for fear of exposing their “secret sauce”. But, as the open review by Enrico Coiera demonstrates, publication of methods for peer review and scientific scrutiny is an essential aspect of moving the field of clinical AI forward.

If Artificial Intelligence Only Benefits a Select Few, Everyone Loses

…nations that have begun to prepare for and explore AI will reap the benefits of an economic boom. The report also demonstrates how anyone who hasn’t prepared, especially in developing nations, will be left behind… In the developing world, in the developing countries or countries with transition economies, there is much less discussion of AI, both from the benefit or the risk side.

The growing divide between nations that are prepared for widespread automation and those that aren’t, between companies that can cut costs by replacing workers and the newly unemployed people themselves, puts us on a collision course for conflict and backlash against further developing and deploying AI technology

Source: Robitzski, D. (2018). If Artificial Intelligence Only Benefits a Select Few, Everyone Loses.

A short post that’s drawn mainly from the 64-page McKinsey report (Notes From the Frontier: Modeling the Impact of AI on the World Economy). This is something that I’ve tried to highlight when I’ve talked about this technology to skeptical colleagues; in many cases, AI in the workplace will arrive as a software update and will, therefore, be available in developing as well as developed countries. This isn’t like buying a new MRI machine where the cost is in the hardware and ongoing support. The existing MRI machine will get an update over the internet and from then on it’ll include analysis of the image and automated reporting. And now the cost of running your radiology department at full staff capacity is starting to look more expensive than it needs to be. This says nothing of the other important tasks that radiologists perform; the fact is that a big component of their daily work involves classifying images, and for human beings, that ship has sailed. While in more developed economies it may be easier to relocate expertise within the same institution, I don’t think we’re going to have that luxury in the developing world. If we’re not thinking about these problems today, we’re going to be awfully unprepared when that software update arrives.

a16z Podcast: Revenge of the Algorithms (Over Data)… Go! No?

An interesting (and sane) conversation about the defeat of AlphaGo by AlphaGo Zero. It almost completely avoids the science-fiction-y media coverage that tends to emphasise the potential for artificial general intelligence and instead focuses on the following key points:

  • Go is a stupendously difficult board game for computers to play but it’s a game in which both players have total information and where the rules are relatively simple. This does not reflect the situation in any real-world decision-making scenario. Correspondingly, this is necessarily a very narrow definition of what an intelligent machine can do.
  • AlphaGo Zero represents an order of magnitude improvement in algorithmic modelling and power consumption. In other words, it does a lot more with a lot less.
  • Related to this, AlphaGo Zero started from scratch, with humans providing only the rules of the game. Zero used reinforcement learning (rather than supervised learning) to rediscover the moves that human beings have developed over the last thousand years or so, and in some cases to find better ones.
  • It’s an exciting achievement but shouldn’t be conflated with any significant step towards machine intelligence that transfers beyond highly constrained scenarios.

Here’s the abstract from the publication in Nature:

A long-standing goal of artificial intelligence is an algorithm that learns, tabula rasa, superhuman proficiency in challenging domains. Recently, AlphaGo became the first program to defeat a world champion in the game of Go. The tree search in AlphaGo evaluated positions and selected moves using deep neural networks. These neural networks were trained by supervised learning from human expert moves, and by reinforcement learning from self-play. Here we introduce an algorithm based solely on reinforcement learning, without human data, guidance or domain knowledge beyond game rules. AlphaGo becomes its own teacher: a neural network is trained to predict AlphaGo’s own move selections and also the winner of AlphaGo’s games. This neural network improves the strength of the tree search, resulting in higher quality move selection and stronger self-play in the next iteration. Starting tabula rasa, our new program AlphaGo Zero achieved superhuman performance, winning 100–0 against the previously published, champion-defeating AlphaGo.
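
To make the “no human data, just self-play” idea a little more concrete, here is a deliberately tiny sketch in Python. It is not AlphaGo Zero (there is no deep network and no tree search, and none of this comes from DeepMind’s code); it just shows the same loop in miniature: an agent that knows only the rules of a toy game (Nim), plays against itself, and updates its value estimates from the outcome of each game.

```python
# Toy self-play sketch (an illustration, not DeepMind's method): tabular value
# estimates instead of a deep network with Monte Carlo tree search.
import random
from collections import defaultdict

PILE = 10          # starting number of stones
MAX_TAKE = 3       # legal move: take 1-3 stones; whoever takes the last stone wins

# value[stones] ~ estimated probability that the player *to move* wins from here
value = defaultdict(lambda: 0.5)
value[0] = 0.0     # no stones left: the side to move has already lost

def legal_moves(stones):
    return range(1, min(MAX_TAKE, stones) + 1)

def choose_move(stones, epsilon=0.1):
    """Pick the move that leaves the opponent in the worst position,
    with a little random exploration (epsilon-greedy)."""
    if random.random() < epsilon:
        return random.choice(list(legal_moves(stones)))
    return min(legal_moves(stones), key=lambda take: value[stones - take])

def self_play_episode(alpha=0.1):
    """Play one game against itself, then update value estimates from the result."""
    stones, history = PILE, []
    while stones > 0:
        history.append(stones)          # state seen by the side to move
        stones -= choose_move(stones)
    outcome = 1.0                       # the player who just moved took the last stone and won
    for s in reversed(history):
        value[s] += alpha * (outcome - value[s])
        outcome = 1.0 - outcome         # the state before that belonged to the opponent

for _ in range(20000):
    self_play_episode()

# From 10 stones the side to move can always win by leaving a multiple of 4,
# so the learned values should be high everywhere except at 4 and 8.
print({s: round(value[s], 2) for s in range(1, PILE + 1)})
```

The point of the toy is the shape of the loop, not the game: the agent is given nothing but the rules, generates its own training data by playing itself, and each iteration of learning improves the play that generates the next batch of data. AlphaGo Zero does the same thing at vastly greater scale, with a neural network and tree search standing in for the lookup table and greedy move selection here.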

Technology Beyond the Tools

You didn’t need to know about how to print on a printing press in order to read a printed book. Writing implements were readily available in various forms in order to record thoughts, as well as communicate with them. The use was simple requiring nothing more than penmanship. The rapid advancement of technology has changed this. Tech has evolved so quickly and so universally in our culture that there is now literacy required in order for people to effectively and efficiently use it.

Reading and writing as a literacy was hard enough for many of us, and now we are seeing that there is a whole new literacy that needs to be not only learned, but taught by us as well.

Source: Whitby, T. (2018). Technology Beyond the Tools.

I wrote about the need to develop these new literacies in a recent article (under review) in OpenPhysio. From the article:

As clinicians become single nodes (and not even the most important nodes) within information networks, they will need data literacy to read, analyse, interpret and make use of vast data sets. As they find themselves having to work more collaboratively with AI-based systems, they will need the technological literacy that enables them to understand the vocabulary of computer science and engineering that enables them to communicate with machines. Failing that, we may find that clinicians will simply be messengers and technicians carrying out the instructions provided by algorithms.

It really does seem like we’re moving towards a society in which the successful use of technology is, at least to some extent, premised on your understanding of how it works. As educators, it is incumbent on us to 1) know how the technology works so that we can 2) help students use it effectively while avoiding exploitation by for-profit companies.

See also: Aoun, J. (2017). Robot-Proof: Higher Education in the Age of Artificial Intelligence. MIT Press.
