Comment: In competition, people get discouraged by competent robots

After each round, participants filled out a questionnaire rating the robot’s competence, their own competence and the robot’s likability. The researchers found that as the robot performed better, people rated its competence higher, its likability lower and their own competence lower.

Lefkowitz, M. (2019). In competition, people get discouraged by competent robots. Cornell Chronicle.

This is worth noting since it seems increasingly likely that we’ll soon be working not only with more competent robots, but also with more competent software. There are already concerns about how clinicians will respond to the recommendations of clinical decision-support systems, especially when those systems make suggestions that are at odds with the clinician’s intuition.

Paradoxically, the effect may be even worse with expert clinicians, who may not always be able to explain their decision-making. Novices, who use more analytical frameworks (or even basic algorithms like “IF this, THEN that”), may find it easier to modify their decisions because their reasoning is more “visible” (System 2). Experts, who rely more on subconscious pattern recognition (System 1), may be less able to identify where in their reasoning process they fell victim to cognitive biases like confirmation bias or availability bias, and so may be less likely to modify their decisions.
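To make the “visible reasoning” point concrete, here is a purely illustrative sketch in Python (the thresholds and labels are hypothetical, not clinical guidance): an explicit IF/THEN rule of the kind a novice might follow exposes every step for inspection and revision, whereas expert pattern recognition behaves more like an opaque function that returns only an answer.

```python
# Illustrative only: hypothetical thresholds, not clinical guidance.

def novice_style_rule(temp_c: float, heart_rate: int) -> tuple[str, list[str]]:
    """A 'System 2' style decision: every step is explicit and auditable."""
    trace = []
    if temp_c >= 38.0:                      # hypothetical fever cutoff
        trace.append(f"temp {temp_c} >= 38.0 -> fever")
        if heart_rate > 100:                # hypothetical tachycardia cutoff
            trace.append(f"HR {heart_rate} > 100 -> tachycardia")
            return "escalate", trace
        return "monitor", trace
    trace.append(f"temp {temp_c} < 38.0 -> no fever")
    return "routine", trace


decision, reasoning = novice_style_rule(38.4, 112)
print(decision)          # escalate
for step in reasoning:   # the full chain of reasoning can be inspected
    print(" -", step)

# Expert pattern recognition (System 1) is more like an opaque scoring
# function: it returns an answer but no inspectable chain of steps to revise.
```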

It seems clear that we need to start thinking about how we’re going to prepare current and future clinicians for the arrival of intelligent agents in the clinical context. If we start disregarding the recommendations of clinical decision-support systems, not because they produce errors in judgement but because we simply don’t like them, then there’s a strong case to be made that it is the human that we cannot trust.


Contrast this with automation bias, which is the tendency to give more credence to decisions made by machines because of a misplaced notion that algorithms are simply more trustworthy than people.

The next generation of AI assistants in enterprise

AI assistants can be applied both to direct customer service and within the operations of an organization. AI that understands customers and context, and that can act proactively, will lead to the automation of many repetitive tasks.

Source: Nichol, A. (2018). The next generation of AI assistants in enterprise.

  • Level 1 – Notification: Simple notifications on your phone.
  • Level 2 – FAQ: The most common type of assistant at the moment, allowing a user to ask a simple question and get a response.
  • Level 3 – Contextual: Context matters: what the user has said before, when / where / how she said it, and so on. Considering context also means being capable of understanding and responding to different and unexpected inputs. This is on the horizon (see Google’s Duplex demo); a minimal code sketch of the difference between Level 2 and Level 3 follows this list.
  • Level 4 – Personalised: AI assistants will start to learn more about their users, taking the initiative and beginning to act on the user’s behalf.
  • Level 5 – Autonomous: Groups of AI assistants that know every customer personally and eventually run large parts of company operations.
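A minimal sketch of the jump from Level 2 to Level 3, using only the Python standard library; the FAQ entry, trigger phrases and replies are invented for illustration. A Level 2 assistant answers each question in isolation, while a contextual assistant carries state from earlier turns:

```python
from dataclasses import dataclass, field

# Hypothetical FAQ table a Level 2 assistant would use: one question,
# one answer, no memory of anything said before.
FAQ = {"opening hours": "We are open 9-5, Monday to Friday."}

@dataclass
class ContextualAssistant:
    """Level 3 sketch: replies depend on the conversation so far."""
    history: list[str] = field(default_factory=list)

    def reply(self, utterance: str) -> str:
        self.history.append(utterance)
        text = utterance.lower()
        if text in FAQ:                          # Level 2 behaviour still works
            return FAQ[text]
        if "what about" in text and self.history[:-1]:
            # Context: interpret an elliptical follow-up against the last turn.
            return f"Interpreting '{utterance}' in light of: '{self.history[-2]}'"
        return "Sorry, I did not understand that."

bot = ContextualAssistant()
print(bot.reply("opening hours"))        # plain FAQ lookup
print(bot.reply("what about weekends?")) # only answerable with context
```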

For an in-depth discussion of this topic, see Chapter 11 (Actors and Agents) in Frankish & Ramsey’s Cambridge Handbook of Artificial Intelligence.

What can machine learning do? Workforce implications

Although recent advances in the capabilities of machine learning (ML) systems are impressive, they are not equally suitable for all tasks… We identify eight key criteria that help distinguish successful ML tasks from tasks where ML is less likely to be successful.

  1. Learning a function that maps well-defined inputs to well-defined outputs.
  2. Large (digital) data sets exist or can be created containing input-output pairs.
  3. The task provides clear feedback with clearly definable goals and metrics.
  4. No long chains of logic or reasoning that depend on diverse background knowledge or common sense.
  5. No need for detailed explanation of how the decision was made.
  6. A tolerance for error and no need for provably correct or optimal solutions.
  7. The phenomenon or function being learned should not change rapidly over time.
  8. No specialized dexterity, physical skills, or mobility required.

Source: Brynjolfsson, E. & Mitchell, T. (2017). What can machine learning do? Workforce implications. Science, 358(6370), 1530–1534.
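Criteria 1–3 describe ordinary supervised learning. A minimal sketch, assuming scikit-learn is available (the dataset here is synthetic): well-defined inputs map to well-defined outputs, the data is a set of input–output pairs, and a held-out accuracy score provides the clear feedback metric.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Criterion 2: a (synthetic) digital dataset of input-output pairs.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                  # well-defined inputs
y = (X[:, 0] + X[:, 1] > 0).astype(int)        # well-defined outputs

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Criterion 1: learn a function mapping inputs to outputs.
model = LogisticRegression().fit(X_train, y_train)

# Criterion 3: a clear, definable feedback metric.
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```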