
Comment: A machine may not take your job, but one could become your boss.

The goal of automation has always been efficiency, but in this new kind of workplace, A.I. sees humanity itself as the thing to be optimized.

Roose, K. (2019). A Machine May Not Take Your Job, but One Could Become Your Boss. The New York Times.

There’s a lot going on in this article, some of which I agree with and some of which I think is not useful. For me, the takeaway is that AI-based systems in the workplace really have the potential to improve our interactions with each other, but that there will be powerful incentives to use them for surveillance of employees.

The article focuses on software that analyses conversations between call centre agents and customers and provides on-screen guidance to the agent on how to “improve” the quality of the interaction. Using natural language processing to provide real-time feedback to call centre workers is, in my opinion, more like coaching than having an AI as “your boss”. We’re all biased and forgetful, we get tired, we have bad days, and so on, and I think that a system that helped me to get around those issues would be useful.
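
To make the “coaching” framing concrete, here is a minimal sketch of what real-time feedback on a call could look like. It assumes a toy keyword-based sentiment scorer rather than whatever NLP the vendor in the article actually uses; the word lists and function names are invented for illustration.

```python
# Toy stand-in for the proprietary NLP described in the article.
# Word lists and thresholds are invented for illustration only.

NEGATIVE = {"frustrated", "angry", "cancel", "useless", "waiting"}
POSITIVE = {"thanks", "great", "helpful", "appreciate"}

def score_turn(utterance: str) -> float:
    """Return a crude sentiment score in [-1, 1] for one customer utterance."""
    words = [w.strip(".,!?") for w in utterance.lower().split()]
    neg = sum(w in NEGATIVE for w in words)
    pos = sum(w in POSITIVE for w in words)
    total = neg + pos
    return 0.0 if total == 0 else (pos - neg) / total

def coaching_prompt(score: float) -> str:
    """Map the sentiment of the latest turn to an on-screen coaching prompt."""
    if score < -0.3:
        return "Customer sounds frustrated: acknowledge the issue and slow down."
    if score > 0.3:
        return "Conversation is going well: confirm next steps and close."
    return "No suggestion."

# Feedback appears while the call is still in progress, one turn at a time.
for turn in ["I've been waiting an hour and I'm getting angry",
             "Thanks, that was really helpful"]:
    print(coaching_prompt(score_turn(turn)))
```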

The article presents this as some kind of dystopia where our decision making (or performance, or behaviour) will be subject to algorithmic manipulation. There are two things to note here:

  1. We’re already subject to algorithmic manipulation (see Twitter, Netflix, email marketing, etc.);
  2. Sometimes I want my performance to be optimised. When I’m running I get constant feedback on my pace, heart rate, distance, etc. all of which give me a sense of whether or not I’m working in an optimal zone for improving my cardiac fitness. Then I choose whether or not to adjust my pace, based on that real-time feedback.

At the end of every call…the notifications are tallied and added to a statistics dashboard that his supervisor can view. If he hides the window by minimizing it, the program notifies his supervisor.

Having said that, there are other aspects of the programme that move us into a more problematic scenario: one where your performance and behaviours (e.g. whether you minimised the feedback window and ignored it) are reported to a supervisor, which may or may not influence your continued employment. This feels more like surveillance than coaching, where employees are less likely to use the system to improve their performance, and more likely to figure out how to avoid the punishment. When the aim of the system is to improve the relationship or interaction with customers it’s easier to get behind. But when it moves into judgement, it becomes more difficult to support.

This brings me to another aspect of the story that’s problematic: when algorithms evaluate performance against a set of metrics that are undefined or invisible to the user (i.e. you don’t know what you’re being compared to) and then the algorithm makes a decision independently that has a real-world consequence (e.g. you get fired because you’re “underperforming”). If supervisors regard the information from the system as representing some kind of ground truth and use it for their own decision making, it’s likely to have negative consequences. For example, when employees are ranked from “most productive” to “least productive” based on a set of criteria that were easy to optimise for but which may have limited validity, and this output is simply accepted as “the truth”, then it is essentially the system making the decision rather than the supervisor.

But framing the problem as if it’s the algorithms that are the issue – “automated systems can dehumanize and unfairly punish employees” – misses the point that it’s human beings who are actually acting with agency in the real world. Unless we’re able to help people figure out how to use the information provided by algorithms, and understand that they don’t represent ground truth, we’re going to see more and more examples of people being taken out of the loop, with damaging consequences.

There’s another aspect of the story that I found worrying, and it’s about the relationship between training data and user behaviour. In the example of the AI system that gives the user feedback on the quality of the conversation with the customer, the system uses different criteria to come up with an empathy score. When the agent scores low on empathy, the system suggests that they need to be more empathic. However, the way to do this is, apparently, to “mirror the customer’s mood”, which seems problematic for a few reasons:

  1. If the customer is angry, should the agent reflect that anger back to them?
  2. How do you determine the customer’s and agent’s moods?
  3. Savvy employees will focus on getting higher empathy scores by using a checklist to work through the variables that the AI uses to calculate the score. But as a supervisor you don’t care about the empathy score, you care about satisfied customers; a toy illustration of this follows the list. (See this earlier post about encouraging people to aim for higher scores on metrics, rather than the actual outcomes you care about).
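
As a deliberately simplified illustration of point 3, assume the empathy score is just a tally of surface features that the agent can tick off. The three features below are invented for this sketch and are not the vendor’s actual criteria; the point is that a perfect score on the proxy says nothing about the outcome you actually care about.

```python
# Toy illustration of optimising the proxy metric rather than the outcome.
# The "empathy" features are invented and are not the vendor's real criteria.

EMPATHY_FEATURES = ("used_customer_name", "apologised", "matched_mood")

def empathy_score(call: dict) -> float:
    """Proxy metric: fraction of surface features the agent ticked off."""
    return sum(bool(call.get(f)) for f in EMPATHY_FEATURES) / len(EMPATHY_FEATURES)

# A savvy agent simply works through the checklist on every call...
gamed_call = {"used_customer_name": True, "apologised": True, "matched_mood": True}
print(empathy_score(gamed_call))  # 1.0, a "perfect" score

# ...while the outcome the supervisor actually cares about is measured
# elsewhere (e.g. a post-call satisfaction survey) and need not improve at all.
post_call_survey = 2  # hypothetical rating out of 5
```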

Using AI to correct for human biases is a good thing. But as more AI enters the workplace, executives will have to resist the temptation to use it to tighten their grip on their workers and subject them to constant surveillance and analysis.

Roose, K. (2019). A Machine May Not Take Your Job, but One Could Become Your Boss. The New York Times.

See also Comment: ‘Robots’ are not ‘coming for your job’ – Management is.


Comment: In competition, people get discouraged by competent robots

After each round, participants filled out a questionnaire rating the robot’s competence, their own competence and the robot’s likability. The researchers found that as the robot performed better, people rated its competence higher, its likability lower and their own competence lower.

Lefkowitz, M. (2019). In competition, people get discouraged by competent robots. Cornell Chronicle.

This is worth noting since it seems increasingly likely that we’ll soon be working not only with more competent robots but also with more competent software. There are already concerns around how clinicians will respond to the recommendations of clinical decision-support systems, especially when those systems make suggestions that are at odds with the clinician’s intuition.

Paradoxically, the effect may be even worse with expert clinicians, who may not always be able to explain their decision-making. Novices, who use more analytical frameworks (or even basic algorithms like IF this, THEN that), may find it easier to modify their decisions because their reasoning is more “visible” (System 2). Experts, who rely more on subconscious pattern recognition (System 1), may be less able to identify where in their reasoning process they fell victim to biases like confirmation or availability bias, and so may be less likely to modify their decisions.

It seems really clear that we need to start thinking about how we’re going to prepare current and future clinicians for the arrival of intelligent agents in the clinical context. If we start disregarding the recommendations of clinical decision support systems, not because they produce errors in judgement but because we simply don’t like them, then there’s a strong case to be made that it is the human that we cannot trust.


Contrast this with automation bias, which is the tendency to give more credence to decisions made by machines because of a misplaced notion that algorithms are simply more trustworthy than people.


The next generation of AI assistants in enterprise

AI assistants can be applied both for direct customer service and within the operations of an organization. AI that understands customers, context, and that can be proactive will lead to automation of many repetitive tasks.

Source: Nichol, A. (2018). The next generation of AI assistants in enterprise.

  • Level 1 – Notification: Simple notifications on your phone.
  • Level 2 – FAQ: The most common type of assistant at the moment, allowing a user to ask a simple question and get a response.
  • Level 3 – Contextual: Context matters: what the user has said before, when / where / how she said it, and so on. Considering context also means being capable of understanding and responding to different and unexpected inputs. This is on the horizon (see the Google Duplex demo below); a minimal sketch of this kind of context-tracking follows the list.
  • Level 4 – Personalised: AI assistants will start to learn more about their users, taking the initiative and beginning to act on the user’s behalf.
  • Level 5 – Autonomous: Groups of AI assistants that know every customer personally and eventually run large parts of company operations.
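
To make the difference between Level 2 and Level 3 concrete, here is a minimal sketch (my own, not from the article) of an assistant that keeps per-conversation state so that a follow-up question can be resolved against what was said before. The class, intents, and order details are all invented; a production assistant would use proper natural language understanding rather than keyword checks.

```python
# Minimal sketch of the jump from Level 2 (stateless FAQ) to Level 3 (contextual).
# All names and data here are hypothetical, for illustration only.

class ContextualAssistant:
    def __init__(self) -> None:
        self.context: dict[str, str] = {}  # per-conversation memory

    def handle(self, utterance: str) -> str:
        text = utterance.lower()
        if "order" in text:
            # Remember the entity so later turns can refer back to it.
            self.context["topic"] = "order #1234"  # hypothetical lookup result
            return "Your order #1234 shipped yesterday."
        if "when" in text and "topic" in self.context:
            # A stateless Level 2 bot would fail here; stored context lets the
            # assistant work out what "it" refers to.
            return f"{self.context['topic']} should arrive on Friday."
        return "Sorry, I didn't understand that."

bot = ContextualAssistant()
print(bot.handle("Where is my order?"))          # sets the context
print(bot.handle("And when will it get here?"))  # answered using the context
```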

For an in-depth discussion of this topic, see Chapter 11 (Actors and Agents) in Frankish & Ramsey’s Cambridge Handbook of Artificial Intelligence.


What can machine learning do? Workforce implications | Science

Although recent advances in the capabilities of machine learning (ML) systems are impressive, they are not equally suitable for all tasks… We identify eight key criteria that help distinguish successful ML tasks from tasks where ML is less likely to be successful.

  1. Learning a function that maps well-defined inputs to well-defined outputs.
  2. Large (digital) data sets exist or can be created containing input-output pairs.
  3. The task provides clear feedback with clearly definable goals and metrics.
  4. No long chains of logic or reasoning that depend on diverse background knowledge or common sense.
  5. No need for detailed explanation of how the decision was made.
  6. A tolerance for error and no need for provably correct or optimal solutions.
  7. The phenomenon or function being learned should not change rapidly over time.
  8. No specialized dexterity, physical skills, or mobility required.
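
The list reads naturally as a screening checklist. The sketch below is my own framing, not the authors’ method: it simply treats each criterion as a yes/no question and reports the fraction satisfied for a candidate task.

```python
# Rough screening checklist based on the article's eight criteria. Treating
# them as equally weighted yes/no questions is an assumption of this sketch.

CRITERIA = (
    "maps well-defined inputs to well-defined outputs",
    "large digital data sets of input-output pairs exist or can be created",
    "clear feedback with definable goals and metrics",
    "no long chains of reasoning or common sense required",
    "no detailed explanation of the decision needed",
    "tolerance for error; no provably correct solution required",
    "the function being learned does not change rapidly over time",
    "no specialized dexterity, physical skills, or mobility required",
)

def ml_suitability(answers: list[bool]) -> float:
    """Fraction of the eight criteria a candidate task satisfies."""
    assert len(answers) == len(CRITERIA)
    return sum(answers) / len(CRITERIA)

# Hypothetical example: a task that ticks every box except explainability.
print(ml_suitability([True, True, True, True, False, True, True, True]))  # 0.875
```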

Source: Brynjolfsson, E. & Mitchell, T. (2017). What can machine learning do? Workforce implications. Science.