AI clinical

a16z Podcast: Putting AI in Medicine, in Practice

A wide-ranging conversation on several different aspects of AI in medicine. Some of the key takeaways for me included:

  • AI (in its current form) has some potential for long-term prediction (e.g. you have an 80% chance of developing diabetes in the next 10 years), but we’re still very far from accurate short-term prediction (e.g. you’re at risk of having a heart attack in the next 3 days).
  • Data flowing from wearable technology (e.g. Fitbits) are difficult for doctors to work with (if they even get access to the data): the signals are noisy, poorly classified, and full of gaps.
  • AI-based diagnosis works really well in closed-loop settings, e.g. ECG, X-ray, MRI. In these situations the image interpretation doesn’t depend on context, which makes AI-based tools very accurate even in the absence of additional EHR data.
  • The use of AI to analyse data may not be the biggest problem to overcome. It may be more difficult to collect data by integrating enough sensors into the environment that can gather data across populations. Imagine tiles in the bathroom that record weight, BP, HR, etc. This would significantly affect our ability to gather useful metrics over time without needing people to remember to put on their Fitbit, for example.
  • In theory, AI doesn’t have to be perfect; it only has to match human-level error rates. Society will need to decide whether it’s OK with machines being as good as people, or whether we’ll hold machine diagnosis to a higher standard than we expect of people.
  • It probably won’t be all or nothing when it comes to AI integration; we’ll have different levels of AI use in healthcare, much like the levels of autonomy for self-driving cars.
  • We may be more comfortable with machine error when the AI is making decisions that are impossible for human doctors to make. For example, wearables will generate about 2 trillion data points in 2018, which cannot be analysed by any team of humans. In those cases, mistakes may be more forgivable than in situations when the AI is reproducing a task that humans perform relatively well.
  • Healthcare startups may start offering complete vertical stacks for specific patient populations. For example, your employer may decide that all employees diagnosed with diabetes will be insured with a company that offers an integrated service for each stage of managing that condition.