I don’t know anything about the backstory here, and am just assuming that the situation is as described in the title of the video.
As soon as I saw the video, there were a few things I thought were worth noting:
- I have no idea if this lady was angry because of the robot, or because of the robot’s inability to help her. If the robot had been competent to help with her problem, would she still have been angry?
- I have no idea if she had unrealistic expectations of the robot, or whether the robot was even capable of helping. Was she asking the robot for her prescription, directions to the toilets, or for a book review of Pride & Prejudice?
- I have no idea if she had mental health problems. Was she prone to violent outbursts? Would this have happened to the human receptionist, if the receptionist had been as unhelpful?
- I have no idea how much time she had spent interacting with the robot. Was this the culmination of 30 minutes of dead-end back-and-forth? Or was it after a single question that didn’t return the expected response?
- I have no idea if her frustration was the result of her initial interaction with the human receptionist, and the robot was simply an object she could vent on.
In short, given the lack of context, we all see what we want to see in this video. If you believe there’s something special about human connection in healthcare, you might see the robot as a justifiable target for anger at everything wrong in the system. If you believe that robots are tools that have the potential to help us solve pressing problems in healthcare, you might feel sympathy for the robot, while also understanding the frustration being expressed. And if you believe that machines can’t take over from humans soon enough (because we don’t make good choices at societal levels, for example), you may think that the woman should be subject to legal action.
I think this video says more about us and our assumptions than it does about robots. We jump to conclusions all the time, especially when the conclusions make us feel better about ourselves. This is especially problematic when the situation is ambiguous and the scenario allows for a wide range of explanations; we’re almost always going to choose the explanation that supports our prejudice (confirmation bias).
To test my knee-jerk reactions, I think it’s useful to explore some of the possible counterfactuals.
- Would this lady have vented her anger at a human receptionist who was incompetent and unable to help? Who didn’t understand her questions? Who gave nonsensical responses? I think the answer would be, Yes.
- Would she have taken a bat to a competent robot? One who answered her questions perfectly, the first time? One who made her feel like her time had been well-spent? I think the answer would be, No.
Given my responses to the questions above, I don’t believe that the robot is the problem in the video. I think the problem lies at the intersection of complicated relationships between people, technology, and systems. This is the outcome of government funding models for healthcare, unrealistic expectations of robot performance, poor user interface design, challenges with human emotional regulation, and so on. This is not as simple as ‘robot = bad’ or ‘human = good’.