Categories: AI

Comment: There’s a new obstacle to getting a job after college: Getting approved by AI

Companies may not be ready to outsource vetting candidates for C-Suite and executive positions to algorithms, but the stakes are lower for entry-level roles and internships. That means some of today’s college students are effectively the guinea pigs for a largely unproven mechanism for evaluating applicants.

Metz, R. (2019). There’s a new obstacle to getting a job after college: Getting approved by AI. CNN Business.

I agree with the concern that we don’t have a good idea of how well these algorithms will work when it comes to narrowing the field of potential interviewees for a post. However, I think that it can’t be any worse than what currently happens.

We already know that unstructured interviews by human beings are poor predictors of future performance (structured interviews seem to work better, but the improvement in validity is marginal…better than chance, but not by much). What if we find out that AI is at least reliable? At first glance, the idea that an AI-based system will screen candidates to narrow the pool of applicants seems unfair, but we already know that being screened and interviewed by a human being is also unfair. So a human interview panel is likely to be both invalid and unreliable, whereas a computer might at least be reliable. I also suspect that AI will be a better predictor of performance than human beings, because it'll probably be less likely to be influenced by irrelevant factors.

For me, this seems to be another example of having different expectations for outcomes, where an AI has to be perfect but a human being gets a pass. Self-driving cars are the same; they have to demonstrate near-perfect reliability, whereas human drivers are responsible for the preventable deaths of tens of thousands of people every year.

Categories: AI

Comment: Self-driving Mercedes will be programmed to sacrifice pedestrians to save the driver.

As we dig deeper, it seems that the problems faced by driverless cars and by human drivers are much the same. We try to avoid crashes and collisions, and we have to make split-second decisions when we can't. Those decisions are governed by our programming and experience. The difference is that computers can think a lot faster, but they can also cause crashes that a human driver would have avoided. These differences pull in different directions, but they don't cancel each other out.

Sorrel, C. (2019). Self-Driving Mercedes Will Be Programmed To Sacrifice Pedestrians To Save The Driver. Fast Company.

Initially I thought that this was a presumptuous decision, but after thinking about it for a few seconds I realised that this is exactly what I would do if I were the driver. And given that I'd likely have my family in the car, I'd double down on this choice. Regardless of how many different scenarios you come up with where it makes sense to sacrifice the vehicle occupants, the reality is that human drivers are making these choices every day, and we're simply not capable of doing the calculations in real time. We're going to react to save ourselves, every time.

Car manufacturers and software engineers should just program the car to save the driver, regardless of the complexity of the scenario, because this is what humans do, and we're fine with it.

Categories: AI, clinical

Algorithmic de-skilling of clinical decision-makers

What will we do when we don’t drive most of the time but have a car that hands control to us during an extreme event?

Agrawal, A., Gans, J. & Goldfarb, A. (2018). Prediction Machines: The Simple Economics of Artificial Intelligence.

Before I get to the take-home message, I need to set this up a bit. The way that machine intelligence currently works is that you train an algorithm to recognise patterns in large data sets, often with the help of people who annotate the data in advance. This is known as supervised learning. Sometimes the algorithm gets no annotations (i.e. no supervision) at all; instead, its outputs are scored against some criterion (a reward signal), judged to be more or less successful, and its behaviour is adjusted accordingly. This is known as reinforcement learning.
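To make the distinction concrete, here is a minimal sketch (toy data and a made-up reward function, not anything from the book): the supervised model learns from human-annotated labels, while the reinforcement learner gets no labels at all and simply nudges its behaviour towards whatever earns reward.

```python
import random
from sklearn.linear_model import LogisticRegression

# Supervised learning: fit a model to examples that humans have annotated.
X = [[0.1, 0.9], [0.8, 0.2], [0.2, 0.7], [0.9, 0.3]]  # toy features
y = [1, 0, 1, 0]                                      # human-annotated labels
model = LogisticRegression().fit(X, y)

# Reinforcement learning: no labels. The agent acts, receives a reward,
# and shifts its estimates so high-reward actions become more likely.
action_values = {"left": 0.0, "right": 0.0}  # estimated value of each action

def reward(action):
    # Made-up environment: "right" pays off 80% of the time, "left" never.
    return 1.0 if action == "right" and random.random() < 0.8 else 0.0

for _ in range(1000):
    if random.random() < 0.1:                  # explore occasionally
        action = random.choice(list(action_values))
    else:                                      # otherwise exploit the best guess
        action = max(action_values, key=action_values.get)
    action_values[action] += 0.1 * (reward(action) - action_values[action])
```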

In both cases, the algorithm isn't trained in the wild but is rather developed within a constrained environment that simulates something of interest in the real world. For example, an algorithm may be trained to deal with uncertainty by playing StarCraft, which mimics the imperfect-information state of real-world decision-making. This kind of probabilistic thinking defines many professional decision-making contexts where we have to make a choice but may only be 70% confident that we're making the right choice.

Eventually, you need to take the algorithm out of the simulated training environment and run it in the real world because this is the only way to find out if it will do what you want it to. In the context of self-driving cars, this represents a high-stakes tradeoff between the benefits of early implementation (more real-world data gathering, more accurate predictions, better autonomous driving capability), and the risks of making the wrong decision (people might die).

Even in a scenario where the algorithm has been trained to very high levels in simulation and then introduced at precisely the right time so as to maximise the learning potential while also minimising risk, it will still hardly ever have been exposed to rare events. We will be in the situation where cars will have autonomy in almost all driving contexts, except those where there is a real risk of someone being hurt or killed. At that moment, because of the limitations of its training, it will hand control of the vehicle back to the driver. And there is the problem. How long will it take for drivers to lose the skills that are necessary for them to make the right choice in that rare event?

Which brings me to my point. Will we see the same loss of skills in the clinical context? Over time, algorithms will take over more and more of our clinical decision-making, in much the same way that they'll take over the responsibilities of a driver. And in almost all situations they'll make more accurate predictions than a person. However, in some rare cases, the confidence level of the prediction will drop enough for control to be handed back to the clinician. Unfortunately, at that point the clinician probably won't have been involved in clinical decision-making for an extended period and so, just when human judgement is determined to be most important, it may also be at its most limited.
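The handover logic at the centre of this argument is simple enough to sketch in a few lines. This is a hypothetical illustration (the threshold, names and model API are all my assumptions, not from the book):

```python
CONFIDENCE_THRESHOLD = 0.95  # assumed cut-off; not a figure from the source

def decide(case, model, clinician):
    """Automate the common cases; hand rare, uncertain ones back to the human."""
    prediction, confidence = model.predict(case)  # hypothetical model API
    if confidence >= CONFIDENCE_THRESHOLD:
        return prediction                         # the routine, automated path
    # The rare event: exactly where human judgement matters most, and
    # (as argued above) where it may have atrophied through disuse.
    return clinician.decide(case)
```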

How will clinicians maintain their clinical decision-making skills at the levels required to take over in rare events, when they are no longer involved in the day-to-day decision-making that hones that same skill?


18 March 2019 Update: The Digital Doctor: Will surgeons lose their skills in the age of automation? AI Med.

Categories: AI

Prof Allan Dafoe on trying to prepare the world for the possibility that AI will destabilise global politics

…even if we stopped at today’s AI technology and simply collected more data, built more sensors, and added more computing capacity, extreme systemic risks could emerge, including:

1) Mass labor displacement, unemployment, and inequality;
2) The rise of a more oligopolistic global market structure, potentially moving us away from our liberal economic world order;
3) Imagery intelligence and other mechanisms for revealing most of the ballistic missile-carrying submarines that countries rely on to be able to respond to nuclear attack;
4) Ubiquitous sensors and algorithms that can identify individuals through face recognition, leading to universal surveillance;
5) Autonomous weapons with an independent chain of command, making it easier for authoritarian regimes to violently suppress their citizens.

Source: Wiblin, R. (2018). Prof Allan Dafoe on trying to prepare the world for the possibility that AI will destabilise global politics.

This is one of those things that isn't intuitive but at the same time is obviously true. Even if all we do going forward is improve what we already have (e.g. cheaper, faster, more powerful computation, sensors, etc.), we could brute-force our way to a vastly different society. It's easy to make fun of all the ways that self-driving cars, natural language processing, and recommendation systems aren't as good as humans. But think about the fact that we have self-driving cars, NLP and recommendation systems. These things may not be perfect today, but they didn't exist 10 years ago. In a decade we've gone from, "This is impossible", to "This isn't perfect". Unless technological development comes to a complete standstill (note: this would require some kind of apocalyptic event), machine learning by itself will transform society using nothing more advanced than larger data sets and more powerful computation.

Categories: AI, clinical

Defensive Diagnostics: the legal implications of AI in radiology

Doctors are human. And humans make mistakes. And while scientific advancements have dramatically improved our ability to detect and treat illness, they have also engendered a perception of precision, exactness and infallibility. When patient expectations collide with human error, malpractice lawsuits are born. And it’s a very expensive problem.

Source: Defensive Diagnostics: the legal implications of AI in radiology

There are a few things to note in this article. The first, and most obvious, is that we hold AI-based expert systems (i.e. algorithmic diagnosis and prediction) to a much higher standard than human experts. It seems strange that we accept the fallibility of human beings but expect nothing less than perfection from AI-based systems. [1]

Medical errors are more frequent than anyone cares to admit. In radiology, the retrospective error rate is approximately 30% across all specialities, with real-time error rates in daily practice averaging between 3% and 5%.

The second takeaway is that one of the most significant areas of influence for AI in clinical settings may not be the primary diagnosis but rather the follow-up analysis that highlights potential mistakes the clinician may have made. These applications of AI for secondary diagnostic review will be cheap and won't add any additional workload for healthcare professionals. They will simply review the clinician's conclusion and flag those cases that may benefit from additional testing. Of course, this will probably be driven by patient litigation.
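As a rough illustration of how lightweight such a secondary-review layer could be, here is a sketch with a hypothetical model API and an assumed flagging threshold (neither comes from the article):

```python
FLAG_THRESHOLD = 0.9  # assumed: only flag confident disagreements

def review(cases, model):
    """Yield completed cases where the model confidently disagrees with the clinician."""
    for case in cases:
        finding, confidence = model.predict(case.image)  # hypothetical API
        if finding != case.clinician_diagnosis and confidence >= FLAG_THRESHOLD:
            yield case  # recommend additional testing or a second read
```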


[1] Incidentally, the same principle seems to apply to self-driving cars; we expect nothing less than a perfect safety record from autonomous vehicles but are quite happy with the status quo for human drivers (1.2 million traffic-related deaths in a single year). Where is the moral panic around the mass slaughter of human beings by human drivers? If an algorithm is even slightly safer than a human being behind the wheel of a car, it would result in thousands fewer deaths per year. And yet it feels like we're going to delay the introduction of autonomous cars until they meet some standard of perfection. To me at least, that seems morally wrong.

Categories: AI

The first AI disruption in medicine might not be radiology

Over 90% of traffic accidents are caused by human error. Whether it is drink driving, inattention, speeding, straight up bad driving, or any other of a myriad of reasons why people crash their cars, the fact is humans are really dangerous on the road. In the USA we average an accident every 200,000 to 500,000 miles, with a fatality every 200 to 500 accidents. This rate can be 3 or 4 times as high in other parts of the world.

Source: The first AI disruption in medicine might not be radiology

The bulk of this article sets up the argument that 1) machine learning is enabling the rapid development of self-driving cars, and that 2) this will lead to the implementation of self-driving taxi services, which will 3) significantly reduce injuries because self-driving cars are already much safer than humans.

The implication for the author is that this will have a significant effect on trauma surgeons, but I think it should be obvious that this will affect a wide range of clinical specialists. For example, how much physiotherapy work comes from the orthopaedic, spinal and head injuries caused by motor vehicle accidents? As self-driving cars cause the incidence of MVAs to drop suddenly, possibly by as much as 90%, "it may be that a whole range of specialties will notice their patients have just stopped coming to see them". Perhaps this will be the first real impact of AI on physiotherapy.
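A quick back-of-the-envelope calculation (my arithmetic, using only the figures quoted above) makes the scale of that drop concrete:

```python
# Quoted US ranges: an accident every 200k-500k miles,
# a fatality every 200-500 accidents.
lo = 200_000 * 200    # 40 million miles per fatality (lower bound)
hi = 500_000 * 500    # 250 million miles per fatality (upper bound)
print(f"One fatality roughly every {lo:,} to {hi:,} miles")

# If self-driving cars cut MVAs by 90%, only a tenth of that caseload remains.
remaining = 0.10
print(f"Post-reduction: one fatality every {lo/remaining:,.0f} to {hi/remaining:,.0f} miles")
```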

Categories: reading

I enjoyed reading (January)

This post is also a bit delayed, but I'm OK with that. During January I found myself reading a bit more than usual about robots, androids, augmented reality and related topics. I'm not sure why it worked out that way, but this collection is more or less representative of what I found interesting during that time. Interestingly, I realised that a common thread running through all of them is that they're pretty much related to three books by Daniel Suarez: Daemon, Freedom, and Kill Decision. If you enjoy this kind of thing, you have to read them.

I, Glasshole: My Year With Google Glass (Mat Honan): I’m fascinated with the concept of wearable, context-aware devices and services, of which Glass is simply the most well-known. I think that the ability to overlay digital information on top of the reality we perceive represents an astounding change in how we experience the world.

For much of 2013, I wore the future across my brow, a true Glasshole peering uncertainly into the post-screen world. I’m not out here all alone, at least not for long. The future is coming to your face too. And your wrist. Hell, it might even be in your clothes. You’re going to be wearing the future all over yourself, and soon. When it comes to wearable computing, it’s no longer a question of if it happens, only when and why and can you get in front of it to stop it with a ball-pein hammer? (Answers: Soon. Because it is incredibly convenient. Probably not.) In a few years, we might all be Glassholes. But in 2013, maybe for the last time, I was in dubiously exclusive face-computing company.

Robots of death, robots of love: The reality of android soldiers and why laws for robots are doomed to failure (Steve Ranger): The idea of fully autonomous robots that are able to make decisions in critical situations is both disturbing and appealing to me. Disturbing because embedding a moral framework that can deal with the complexity of warfare is ethically problematic. Appealing because in many situations, robots may actually be able to make better decisions than human beings (think of self-driving cars).

While fully autonomous robot weapons might not be deployed for two or three decades, the International Committee for Robot Arms Control (ICRAC), an international group of academics and experts concerned about the implications of a robot arms race, argues a prohibition on the development and deployment of autonomous weapons systems is the correct approach. “Machines should not be allowed to make the decision to kill people,” it states.

Better Than Human: Why Robots Will — And Must — Take Our Jobs (Kevin Kelly): Kevin Kelly’s article, We are the web, was one of the first things I read that profoundly changed the way I think about the internet. Needless to say, I almost always find his thoughts on technology to be insightful and thought-provoking.

All the while, robots will continue their migration into white-collar work. We already have artificial intelligence in many of our machines; we just don’t call it that. Witness one piece of software by Narrative Science (profiled in issue 20.05) that can write newspaper stories about sports games directly from the games’ stats or generate a synopsis of a company’s stock performance each day from bits of text around the web. Any job dealing with reams of paperwork will be taken over by bots, including much of medicine. Even those areas of medicine not defined by paperwork, such as surgery, are becoming increasingly robotic. The rote tasks of any information-intensive job can be automated. It doesn’t matter if you are a doctor, lawyer, architect, reporter, or even programmer: The robot takeover will be epic.

And it has already begun.

A review of Her (Ray Kurzweil): Kurzweil's thinking on the merging of human beings with technology is fascinating. If you're interested in this topic, the collection of essays on his blog is awesome.

With emerging eye-mounted displays that project images onto the wearer’s retinas and also look out at the world, we will indeed soon be able to do exactly that. When we send nanobots into the brain — a circa-2030s scenario by my timeline — we will be able to do this with all of the senses, and even intercept other people’s emotional responses.