
Comment: A machine may not take your job, but one could become your boss.

The goal of automation has always been efficiency, but in this new kind of workplace, A.I. sees humanity itself as the thing to be optimized.

Roose, K. (2019). A Machine May Not Take Your Job, but One Could Become Your Boss. The New York Times.

There’s a lot going on in this article, some of which I agree with and some of which I don’t think is useful. For me, the takeaway is that AI-based systems in the workplace really do have the potential to improve our interactions with each other, but that there will be powerful incentives to use them for surveillance of employees.

The article focuses on software that analyses the conversations between call centre agents and customers and provides on-screen guidance to the agent on how to “improve” the quality of the interaction. Using natural language processing to provide real-time feedback to call centre workers is, in my opinion, more like coaching than having an AI as “your boss”. We’re all biased and forgetful, we get tired and have bad days, and I think that a system that helped me work around those issues would be useful.
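
To make that concrete, here’s a rough sketch of what that kind of real-time coaching loop might look like. To be clear, this is not the software from the article: the cue lists, thresholds and prompts below are all invented, and a real product would use trained language models rather than keyword matching.

```python
# A minimal, hypothetical sketch of real-time coaching prompts for a call
# centre agent. Each exchange is scored with a naive keyword heuristic and,
# if needed, a coaching message is surfaced to the agent.
import string

NEGATIVE_CUES = {"unacceptable", "ridiculous", "cancel", "angry", "frustrated"}
EMPATHY_CUES = {"understand", "sorry", "appreciate", "help"}

def _tokens(text: str) -> set[str]:
    """Lowercase and strip punctuation so cue words match reliably."""
    return {w.strip(string.punctuation) for w in text.lower().split()}

def coach(agent_utterance: str, customer_utterance: str) -> str | None:
    """Return an on-screen prompt for the agent, or None if nothing to flag."""
    customer_upset = bool(_tokens(customer_utterance) & NEGATIVE_CUES)
    agent_empathic = bool(_tokens(agent_utterance) & EMPATHY_CUES)
    if customer_upset and not agent_empathic:
        return "Customer sounds frustrated - acknowledge their concern."
    if len(agent_utterance.split()) > 60:
        return "Long explanation - pause and check the customer is following."
    return None

# One turn in a call:
prompt = coach(
    agent_utterance="Your account shows the order shipped on Tuesday.",
    customer_utterance="This is ridiculous, I was promised delivery last week.",
)
print(prompt)  # -> "Customer sounds frustrated - acknowledge their concern."
```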

The article presents this as some kind of dystopia where our decision making (or performance, or behaviour) will be subject to algorithmic manipulation. There are two things to note here:

  1. We’re already subject to algorithmic manipulation (see Twitter, Netflix, email marketing, etc.);
  2. Sometimes I want my performance to be optimised. When I’m running I get constant feedback on my pace, heart rate, distance, etc., all of which give me a sense of whether or not I’m working in an optimal zone for improving my cardiac fitness. I then choose whether or not to adjust my pace, based on that real-time feedback.

At the end of every call…the notifications are tallied and added to a statistics dashboard that his supervisor can view. If he hides the window by minimizing it, the program notifies his supervisor.

Having said that, there are other aspects of the program that move us into a more problematic scenario: one where your performance and behaviours (e.g. whether you minimised the feedback window and ignored it) are reported to a supervisor, and may or may not influence your continued employment. This feels more like surveillance than coaching, and employees are less likely to use the system to improve their performance and more likely to figure out how to avoid the punishment. When the aim of the system is to improve the relationship or interaction with customers, it’s easier to get behind. But when it moves into judgement, it becomes more difficult to support.

This brings me to another aspect of the story that’s problematic: when algorithms evaluate performance against a set of metrics that are undefined or invisible to the user (i.e. you don’t know what you’re being compared against), and then make decisions independently that have real-world consequences (e.g. you get fired because you’re “underperforming”). If supervisors regard the information from the system as representing some kind of ground truth and use it for their own decision making, it’s likely to have negative consequences. For example, when employees are ranked from “most productive” to “least productive” based on criteria that were easy to optimise for but which may have limited validity, and that output is simply accepted as “the truth”, then it is essentially the system making the decision rather than the supervisor.
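
A toy example makes the problem obvious. The names, numbers and metrics below are entirely made up, but they show how a ranking built on an easy-to-measure proxy can invert the ranking you’d get from the outcome you actually care about:

```python
# Hypothetical illustration of the ranking problem: agents sorted by an
# easy-to-measure proxy ("calls per hour") come out in a different order
# than when sorted by the outcome a supervisor actually cares about
# (customer satisfaction). All numbers are invented for the example.

agents = [
    {"name": "A", "calls_per_hour": 14, "satisfaction": 3.1},
    {"name": "B", "calls_per_hour": 9,  "satisfaction": 4.6},
    {"name": "C", "calls_per_hour": 11, "satisfaction": 4.2},
]

by_proxy = sorted(agents, key=lambda a: a["calls_per_hour"], reverse=True)
by_outcome = sorted(agents, key=lambda a: a["satisfaction"], reverse=True)

print([a["name"] for a in by_proxy])    # ['A', 'C', 'B'] - "most productive"
print([a["name"] for a in by_outcome])  # ['B', 'C', 'A'] - happiest customers

# If the proxy ranking is treated as ground truth, agent A looks best and
# agent B looks "underperforming", even though B's customers are the most
# satisfied - the decision has effectively been delegated to the metric.
```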

But framing the problem as if the algorithms themselves are the issue – “automated systems can dehumanize and unfairly punish employees” – misses the point that it’s human beings who are actually acting with agency in the real world. Unless we’re able to help people figure out how to use the information provided by algorithms, and to understand that it doesn’t represent ground truth, we’re going to see more and more examples of people being taken out of the loop, with damaging consequences.

There’s another aspect of the story that I found worrying, and it’s about the relationship between training data and user behaviour. In the example of the AI system that gives the agent feedback on the quality of the conversation with the customer, the system uses a set of criteria to come up with an empathy score. When the agent scores low on empathy, the system suggests that they need to be more empathic. However, the way to do this is, apparently, to “mirror the customer’s mood”, which seems problematic for a few reasons:

  1. If the customer is angry, should the agent reflect that anger back to them?
  2. How do you determine the customer’s and agent’s moods?
  3. Savvy employees will focus on getting higher empathy scores by using a checklist to work through the variables that the AI uses to calculate the score. But as a supervisor you don’t care about the empathy score, you care about satisfied customers; there’s a toy illustration of this after the list. (See this earlier post about encouraging people to aim for higher scores on metrics, rather than the actual outcomes you care about).
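
Here’s the kind of thing I mean. The scoring formula, phrases and weights below are entirely made up (this is not the system described in the article), but they show how a score built from surface cues can be maxed out by reciting the right phrases while a genuinely helpful response scores nothing:

```python
# Hypothetical "empathy score" built from surface features of the agent's
# speech. Because the inputs are simple, an agent can hit a high score by
# working through a checklist of cues rather than by actually being empathic.

EMPATHY_PHRASES = ["i understand", "i'm sorry", "i appreciate", "thank you"]

def empathy_score(agent_transcript: str) -> float:
    text = agent_transcript.lower()
    phrase_hits = sum(text.count(p) for p in EMPATHY_PHRASES)
    questions = text.count("?")  # crude stand-in for "checking in" with the customer
    # Arbitrary weights, capped at 10 - the point is that the score only sees
    # these surface cues, not whether the customer actually felt heard.
    return min(10.0, 2.0 * phrase_hits + 1.0 * questions)

scripted = "I understand. I'm sorry. I appreciate that. Thank you. Anything else?"
genuine = "That delay cost you a day's work, let me fix it now and waive the fee."

print(empathy_score(scripted))  # 9.0 - checklist recital scores highly
print(empathy_score(genuine))   # 0.0 - concrete help scores nothing
```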

Using AI to correct for human biases is a good thing. But as more AI enters the workplace, executives will have to resist the temptation to use it to tighten their grip on their workers and subject them to constant surveillance and analysis.

Roose, K. (2019). A Machine May Not Take Your Job, but One Could Become Your Boss. The New York Times.

See also Comment: ‘Robots’ are not ‘coming for your job’ – Management is.