Doctors are burning out because electronic medical records are broken

For all the promise that digital records hold for making the system more efficient—and the very real benefit these records have already brought in areas like preventing medication errors—EMRs aren’t working on the whole. They’re time-consuming, prioritize billing codes over patient care, and too often force physicians to focus on digital recordkeeping rather than the patient in front of them.

Source: Minor, L. (2017). Doctors are burning out because electronic medical records are broken.

I’ve read that some physicians spend up to 60% of their day capturing patient information in the EMR. And this isn’t because there’s a lot of information to capture. It’s often down to confusing user interfaces, misguided approaches to security (e.g. having to enter multiple different passwords, with no off-site access), and poor design that results in physicians capturing more information than necessary.

There’s interest in using natural language processing to analyse recorded conversations between clinicians and their colleagues and patients. While the technology is still unsuitable for mainstream use, it seems likely to keep improving until it is.
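
To make that concrete, here’s a minimal sketch of what such a pipeline might look like, using the open-source Whisper model for transcription and a naive keyword pass standing in for real clinical NLP. The audio file name and the term list are illustrative assumptions, not details from any real system:

```python
# Minimal sketch of an ambient-documentation pipeline: transcribe a
# recorded consultation, then flag candidate sentences for the
# clinician to review. Assumes the open-source `openai-whisper`
# package; the file name and term list are illustrative placeholders.
import whisper

# Load a small general-purpose speech model.
model = whisper.load_model("base")

# Transcribe the recorded consultation (hypothetical file name).
result = model.transcribe("consultation.wav")
transcript = result["text"]

# Stand-in for real clinical NLP: flag sentences mentioning terms a
# physician might want pulled into the note. A production system
# would use a trained clinical entity recogniser instead.
TERMS_OF_INTEREST = {"pain", "medication", "allergy", "dosage"}

flagged = [
    sentence.strip()
    for sentence in transcript.split(".")
    if any(term in sentence.lower() for term in TERMS_OF_INTEREST)
]

for sentence in flagged:
    print("Review:", sentence)
```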

Also, consider reading:

Lip-reading artificial intelligence could help the deaf—or spies

The researchers started with 140,000 hours of YouTube videos of people talking in diverse situations. Then, they designed a program that created clips a few seconds long with the mouth movement for each phoneme, or word sound, annotated. The program filtered out non-English speech, nonspeaking faces, low-quality video, and video that wasn’t shot straight ahead. Then, they cropped the videos around the mouth. That yielded nearly 4000 hours of footage, including more than 127,000 English words.

After training, the researchers tested their system on 37 minutes of video it had not seen before. The AI misidentified only 41% of the words… That might not sound impressive, but the best previous computer method, which focuses on individual letters rather than phonemes, had a word error rate of 77%. In the same study, professional lip readers erred at a rate of 93% (though in real life they have context and body language to go on, which helps).

Source: Lip-reading artificial intelligence could help the deaf—or spies. Science (AAAS).
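
As an aside, the dataset-building step described above is essentially a filtering pipeline over clip metadata. Here’s a rough sketch of that logic as I read it; the field names and thresholds are my own assumptions for illustration (the actual pipeline used trained models for each check):

```python
# Rough sketch of the clip-filtering step described in the excerpt.
# The metadata fields and thresholds are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Clip:
    language: str         # detected language of the speech
    is_speaking: bool     # does the visible face belong to the speaker?
    quality_score: float  # 0.0 (unusable) to 1.0 (sharp, well-lit)
    yaw_degrees: float    # head rotation; 0 = facing the camera

def keep(clip: Clip) -> bool:
    """Apply the four filters the researchers describe."""
    return (
        clip.language == "en"            # drop non-English speech
        and clip.is_speaking             # drop non-speaking faces
        and clip.quality_score >= 0.5    # drop low-quality video
        and abs(clip.yaw_degrees) <= 30  # keep roughly head-on shots
    )

clips = [
    Clip("en", True, 0.9, 5.0),    # kept
    Clip("fr", True, 0.9, 0.0),    # dropped: not English
    Clip("en", False, 0.8, 0.0),   # dropped: face isn't speaking
    Clip("en", True, 0.2, 0.0),    # dropped: low quality
    Clip("en", True, 0.9, 80.0),   # dropped: not shot straight ahead
]
print([keep(c) for c in clips])    # [True, False, False, False, False]
```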
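
It’s also worth spelling out what “word error rate” means, since the whole comparison rests on it: the word-level edit distance (substitutions, insertions, and deletions) between the system’s output and a reference transcript, divided by the number of words in the reference. This is the standard definition, not something specific to this study; a minimal implementation:

```python
# Word error rate: word-level Levenshtein distance between a hypothesis
# and a reference transcript, divided by the reference word count.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # deleting i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # inserting j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,         # deletion
                dp[i][j - 1] + 1,         # insertion
                dp[i - 1][j - 1] + cost,  # substitution or match
            )
    return dp[len(ref)][len(hyp)] / len(ref)

# One substitution out of six reference words: WER ≈ 0.17.
print(word_error_rate("the cat sat on the mat", "the cat sat on the hat"))
```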

There’s not much else to say here, other than to highlight one of the potential applications in healthcare: patients who are hard of hearing could have a universal translator with them at all times. In a country like South Africa, where the Constitution mandates the provision of healthcare in a language of the patient’s choosing but we have 12 official languages and a huge shortage of translators, you can see how this might be useful.