Comment: How do we learn to work with intelligent machines?

I discussed something related to this earlier this year (the algorithmic de-skilling of clinicians) and thought that this short presentation added something extra. It’s not just that AI and machine learning have the potential to create scenarios in which qualified clinical experts become de-skilled over time; they will also impact on our ability to teach and learn those skills in the first place.

We’re used to the idea of a novice working closely with a more experienced clinician, and learning from them through observation and questioning (how closely this maps onto reality is a different story). When the tasks usually performed by more experienced clinicians are outsourced to algorithms, who does the novice learn from?

Will clinical supervision consist of talking undergraduate students through the algorithmic decision-making process? Discussing how probabilistic outputs were determined from limited datasets? Explaining how to interpret the confidence levels of clinical decision-support systems? When clinical decisions are made by AI-based systems in the real world of clinical practice, what will we lose in the undergraduate clinical programme, and how do we plan on addressing it?

Book chapter published: Shaping our algorithms before they shape us

I’ve just had a chapter published in an edited collection entitled: Artificial Intelligence and Inclusive Education: Speculative Futures and Emerging Practices. The book is edited by Jeremy Knox, Yuchen Wang and Michael Gallagher and is available here.

Here’s the citation: Rowe M. (2019) Shaping Our Algorithms Before They Shape Us. In: Knox J., Wang Y., Gallagher M. (eds) Artificial Intelligence and Inclusive Education. Perspectives on Rethinking and Reforming Education. Springer, Singapore. https://doi.org/10.1007/978-981-13-8161-4_9.

And here’s my abstract:

A common refrain among teachers is that they cannot be replaced by intelligent machines because of the essential human element that lies at the centre of teaching and learning. While it is true that there are some aspects of the teacher-student relationship that may ultimately present insurmountable obstacles to the complete automation of teaching, there are important gaps in practice where artificial intelligence (AI) will inevitably find room to move. Machine learning is the branch of AI research that uses algorithms to find statistical correlations between variables that may or may not be known to the researchers. The implications of this are profound and are leading to significant progress being made in natural language processing, computer vision, navigation and planning. But machine learning is not all-powerful, and there are important technical limitations that will constrain the extent of its use and promotion in education, provided that teachers are aware of these limitations and are included in the process of shepherding the technology into practice. This has always been important but when a technology has the potential of AI we would do well to ensure that teachers are intentionally included in the design, development, implementation and evaluation of AI-based systems in education.
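As an aside for readers less familiar with the terminology, here’s a toy sketch of what “finding statistical correlations between variables” looks like in practice. The data is synthetic and purely illustrative; it has nothing to do with the chapter itself.

```python
# A toy illustration of "finding statistical correlations between variables":
# the model is only given examples and recovers the hidden relationship from them.
# The data below is synthetic and invented purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
hours_studied = rng.uniform(0, 10, size=200)
exam_score = 50 + 4 * hours_studied + rng.normal(0, 5, size=200)  # hidden relationship plus noise

# Fit a straight line: the learned slope and intercept approximate the hidden relationship.
slope, intercept = np.polyfit(hours_studied, exam_score, deg=1)
print(f"learned relationship: score is roughly {intercept:.1f} + {slope:.1f} x hours")
```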

Comment: Could robots make us better humans?

This is one of his arguments for listening to AI-generated music, studying how computers do maths and…gazing at digitally produced paintings: to understand how advanced machines work at the deepest level, in order to make sure we know everything about the technology that is now built into our lives.

Harris, J. (2019). Could robots make us better humans? The Guardian.

Putting aside the heading that conflates “robots” with “AI”, there are several insightful points worth noting in this Guardian interview with the Oxford-based mathematician and musician, Marcus du Sautoy. I think it’ll be easiest if I just work through the article and highlight them in the order in which they appear.

1. “My PhD students seem to have to spend three years just getting to the point where they understand what’s being asked of them…”: It’s getting increasingly difficult to make advances in a variety of research domains. The low-hanging fruit has been picked, and it consequently takes longer and longer to get to the forefront of knowledge in any particular area. At some point, making progress in any scientific endeavour is going to require so much expertise that no single human being will be able to contribute much to the overall problem.

2. “I have found myself wondering, with the onslaught of new developments in AI, if the job of mathematician will still be available to humans in decades to come. Mathematics is a subject of numbers and logic. Isn’t that what computers do best?”: On top of this, we also need to contend with the idea that advances in AI seem to indicate that some of these systems are able to develop innovations in what we might consider to be deeply human pursuits. Whether we call this creativity or something else, it’s clear that AI-based systems are arriving at insights into problems that we may eventually have reached ourselves, albeit at some distant point in the future.

3. “I think human laziness is a really important part of finding good, new ways to do things…”: Even in domains of knowledge that seem to be dominated by computation, there is hope in the idea that working together, we may be able to develop new solutions to complex problems. Human beings often look for shortcuts when faced with inefficiency or boredom, something that AI-based systems are unlikely to do because they can simply brute force their way through the problem. Perhaps a combination of a human desire to take the path of least resistance, combined with the massive computational resources that an AI could bring to bear, would result in a solution that’s beyond the capacity of either working in isolation.

4. “Whenever I talk about maths and music, people get very angry because they think I’m trying to take the emotion out of it…”: Du Sautoy suggests that what we’re responding to in creative works of art isn’t an innately emotional thing. Rather, there’s a mathematical structure that we recognise first, and the emotion comes later. If that’s true, then there really is nothing standing in the way of AI-based systems not only creating beautiful art (they already do that) but creating art that moves us.

5. “We often behave too like machines. We get stuck. I’m probably stuck in my ways of thinking about mathematical problems”: If it’s true that AI-based systems may open us up to new ways of thinking about problems, we may find that working in collaboration with them makes us – perhaps counterintuitively – more human. If we keep asking what it is that makes us human, and let machines take on the tasks that don’t fit into that model, it may provide space for us to expand and develop those things that we believe make us unique. Rather than competing on computation and reason, what if we left those things to machines, and looked instead to find other ways of valuing human capacity?

Comment: Lessons learned building natural language processing systems in health care

Many people make the mistake of assuming that clinical notes are written in English. That happens because that’s how doctors will answer if you ask them what language they use.

Talby, D. (2019). Lessons learned building natural language processing systems in health care. O’Reilly.

This is an interesting post making the point that medical language – especially when written in clinical notes – is not the same as other, more typical, human languages. This is important to recognise in the context of training natural language processing (NLP) models in the healthcare context because medical languages have different vocabularies, grammatical structure, and semantics. Trying to get an NLP system to “understand”* medical language is a fundamentally different problem to understanding other languages.

The lessons from this article are slightly technical (although not difficult to follow) and do a good job highlighting why NLP in health systems is seeing slower progress than the NLP running on your phone. You may think that, since Google Translate does quite well translating between English and Spanish, for example, it should also be able to translate between English and “Radiography”. This article explains why that problem is not only harder than “normal” translation, but also different.

* Note: I’m saying “understand” while recognising that current NLP systems understand nothing. They’re statistically modelling the likelihood that certain words follow certain other words and have no concept of what those words mean.
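To make that footnote a little more concrete, here’s a minimal sketch of what that kind of statistical modelling amounts to: a toy bigram model that simply counts which tokens tend to follow which other tokens. The clinical-style shorthand below is invented for illustration; nothing in the model represents what any of the terms mean.

```python
# A toy bigram model: estimate how likely one token is to follow another by
# counting co-occurrences in a tiny corpus of invented clinical-style notes.
from collections import Counter, defaultdict

notes = [
    "pt c/o sob , r/o mi",           # "patient complains of shortness of breath, rule out MI"
    "pt c/o cp , ecg ordered",       # "patient complains of chest pain, ECG ordered"
    "pt denies sob , vitals stable",
]

# Count how often each token follows each other token.
bigram_counts = defaultdict(Counter)
for note in notes:
    tokens = note.split()
    for current, nxt in zip(tokens, tokens[1:]):
        bigram_counts[current][nxt] += 1

def next_token_probabilities(token):
    """Relative frequency of each token seen after `token` in the tiny corpus."""
    counts = bigram_counts[token]
    total = sum(counts.values())
    return {t: c / total for t, c in counts.items()}

print(next_token_probabilities("c/o"))  # {'sob': 0.5, 'cp': 0.5} -- counts, not meaning
```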

Comment: Training a single AI model can emit as much carbon as five cars in their lifetimes

The results underscore another growing problem in AI, too: the sheer intensity of resources now required to produce paper-worthy results has made it increasingly challenging for people working in academia to continue contributing to research. “This trend toward training huge models on tons of data is not feasible for academics…because we don’t have the computational resources. So there’s an issue of equitable access between researchers in academia versus researchers in industry.”

Hao, K. (2019). Training a single AI model can emit as much carbon as five cars in their lifetimes. MIT Technology Review.

The article focuses on the scale of the financial and environmental cost of training natural language processing (NLP) models, comparing the carbon emissions of various AI models to those of a car over its lifetime. To be honest, this isn’t something I’ve given much thought to, but seeing it presented visually really drives the point home.
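For a sense of how this kind of estimate is typically put together (energy drawn by the hardware over the training period, scaled by data-centre overhead and converted to CO2 using a grid emissions factor), here’s a rough back-of-the-envelope sketch. Every number below is an assumption I’ve made for illustration, not a figure from the article or the underlying paper.

```python
# Back-of-the-envelope carbon estimate for a single training run. All values
# are illustrative assumptions, not figures reported in the article or paper.

avg_power_draw_kw = 1.5      # assumed average draw of the training hardware, in kW
training_hours = 24 * 7      # assumed one week of continuous training
pue = 1.58                   # assumed power usage effectiveness (data-centre overhead)
kg_co2e_per_kwh = 0.45       # assumed grid emissions factor

energy_kwh = avg_power_draw_kw * training_hours * pue
emissions_kg = energy_kwh * kg_co2e_per_kwh

print(f"Energy: {energy_kwh:.0f} kWh, emissions: ~{emissions_kg:.0f} kg CO2e")
```

Scaling up the power draw, the length of the run, and the number of runs (hyperparameter searches involve many) is what produces the car-sized numbers in the article.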

As much as this is a cause for concern, I’m less worried about it in the long term, for the following reason. As the authors cited in the article point out, the code and models for AI and NLP are currently really inefficient; they don’t need to be, because compute is relatively easy to come by (if you’re Google or Facebook). I think that the models will become more efficient, as is evident from the fact that new computer vision algorithms can get to the same outcomes with datasets that are orders of magnitude smaller than was previously possible.

For me, though, the quote that I’ve pulled from the article to start this post is more compelling. If the costs of training NLP models are so high, it seems likely that companies like Google, Facebook and Amazon will be the only ones who can do the high-end research necessary to drive the field forward. Academics at universities have an incentive to create more efficient models, which they publish; companies can then take advantage of those new models while at the same time having access to much more computational resources.

From where I’m standing this makes it seem that private companies will always be at the forefront of AI development, which makes me less optimistic than if it were driven by academics. Maybe I’m just being naive (and probably also biased) but this seems less than ideal.

You can find the full paper here on arXiv.

Comment: ‘Robots’ Are Not ‘Coming for Your Job’—Management Is

…in practice, ‘the robots are coming for our jobs’ usually means something more like ‘a CEO wants to cut his operating budget by 15 percent and was just pitched on enterprise software that promises to do the work currently done by thirty employees in accounts payable.’

Merchant, B. (2019). ‘Robots’ Are Not ‘Coming for Your Job’—Management Is. Gizmodo.

It’s important to understand that “technological progress” is not an inexorable march towards an inevitable conclusion that we are somehow powerless to change. We – people – make decisions that influence where we’re going and to some extent, where we end up is evidence of what we value as a society.

Comment: Scientists teach computers fear—to make them better drivers

The scientists placed sensors on people’s fingers to record pulse amplitude while they were in a driving simulator, as a measure of arousal. An algorithm used those recordings to learn to predict an average person’s pulse amplitude at each moment on the course. It then used those “fear” signals as a guide while learning to drive through the virtual world: If a human would be scared here, it might muse, “I’m doing something wrong.”

Hutson, M. (2019). Scientists teach computers fear—to make them better drivers. Science magazine.

This makes intuitive sense; algorithms have no idea what humans fear, nor even what “fear” is. This project takes human fight-or-flight physiological data and uses it to train an autonomous driving algorithm, giving it a sense of what we feel when we face anxiety-producing situations. The system can use those fear signals to more quickly identify when it is moving into dangerous territory, adjusting its behaviour to be less risky.
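As a rough sketch of how a signal like this might be folded into training, assuming a reward-based learning setup: a model trained to predict human pulse amplitude from the driving state acts as a penalty on the reward the driving agent learns from. The names and the penalty weight below are my own assumptions for illustration, not the authors’ implementation.

```python
# Sketch: fold a learned "fear" prediction into the reward signal used to train
# a driving agent. fear_model is assumed to have been trained separately to
# predict human pulse amplitude (arousal) from driving states.

def shaped_reward(env_reward, state, fear_model, penalty_weight=0.5):
    """Discount the simulator's reward in states a human would likely find frightening.

    env_reward     -- reward returned by the driving simulator (progress, no collision, etc.)
    state          -- the agent's current observation of the driving scene
    fear_model     -- model predicting average human pulse amplitude ("fear") for a state
    penalty_weight -- how strongly predicted fear discounts the reward (assumed value)
    """
    predicted_fear = fear_model.predict(state)  # higher = a human would likely be scared here
    return env_reward - penalty_weight * predicted_fear
```

The appeal is that the agent gets a warning about risky states before it ever crashes, rather than only learning from failures after the fact.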

There are interesting potential use cases in healthcare; surgery, for example. When training algorithms on simulations or games, errors do not lead to high-stakes consequences. However, when trusting machines to make potentially life-threatening choices, we’d like them to be more circumspect and risk-averse. One of the challenges is getting them to include a human’s perception of risk in the decision-making process. Learning that cutting a particular artery will likely lead to death can be done by cutting that artery hundreds of times (in simulation) and noting the outcome. The fear-signal approach suggests an alternative: an algorithm that “senses” a rising fear response in the surgeon before cutting the artery, and possibly sends a signal indicating that it should slow down and call for help. This could help when deciding whether or not surgical machines should have greater autonomy when performing surgery, because we could have more confidence that they’d ask for human intervention at appropriate times.

Summary: OECD Principles on AI

The Organisation for Economic Co-operation and Development (OECD) has just released a list of recommendations to promote the development of AI that is “innovative and trustworthy and that respects human rights and democratic values”. The principles are meant to complement existing OECD standards around security, risk management and business practices, and could be seen as a response to concerns around the potential for AI systems to undermine democracy.

The principles were developed by a panel consisting of more than 50 experts from 20 countries, as well as leaders from business, civil society, academic and scientific communities. It should be noted that these principles are not legally binding and should be thought of as suggestions that might influence the decision-making of the stakeholders involved in AI development i.e. all of us. The OECD recognises that:

  • AI has pervasive, far-reaching and global implications that are transforming societies, economic sectors and the world of work, and are likely to increasingly do so in the future;
  • AI has the potential to improve the welfare and well-being of people, to contribute to positive sustainable global economic activity, to increase innovation and productivity, and to help respond to key global challenges;
  • And that, at the same time, these transformations may have disparate effects within, and between societies and economies, notably regarding economic shifts, competition, transitions in the labour market, inequalities, and implications for democracy and human rights, privacy and data protection, and digital security;
  • And that trust is a key enabler of digital transformation; that, although the nature of future AI applications and their implications may be hard to foresee, the trustworthiness of AI systems is a key factor for the diffusion and adoption of AI; and that a well-informed whole-of-society public debate is necessary for capturing the beneficial potential of the technology [my emphasis], while limiting the risks associated with it;

The recommendations identify five complementary values-based principles for the responsible stewardship of trustworthy AI (while these principles are meant to be general, they’re clearly also appropriate in the more specific context of healthcare):

  • AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.
  • AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society.
  • There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them.
  • AI systems must function in a robust, secure and safe way throughout their life cycles and potential risks should be continually assessed and managed.
  • Organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.

The OECD also provides five recommendations to governments:

  • Facilitate public and private investment in research & development to spur innovation in trustworthy AI.
  • Foster accessible AI ecosystems with digital infrastructure and technologies and mechanisms to share data and knowledge.
  • Ensure a policy environment that will open the way to deployment of trustworthy AI systems.
  • Empower people with the skills for AI and support workers for a fair transition.
  • Co-operate across borders and sectors to progress on responsible stewardship of trustworthy AI.

For a more detailed description of the principles, as well as the background and plans for follow-up and monitoring processes, see the OECD Legal Instrument describing the recommendations.

Comment: For a Longer, Healthier Life, Share Your Data

There are a number of overlapping reasons it is difficult to build large health data sets that are representative of our population. One is that the data is spread out across thousands of doctors’ offices and hospitals, many of which use different electronic health record systems. It’s hard to extract records from these systems, and that’s not an accident: The companies don’t want to make it easy for their customers to move their data to a competing provider.

Miner, L. (2019). For a Longer, Healthier Life, Share Your Data. The New York Times.

The author goes on to talk about problems with HIPAA, which he suggests are the bigger obstacle to the large-scale data analysis that is necessary for machine learning. While I agree that HIPAA makes it difficult for companies to enable the sharing of health data while also complying with regulations, I don’t think it’s the main problem.

The requirements around HIPAA could change overnight through legislation. This would be challenging politically and legally, but it’s not hard to see how it could happen: there are well-understood processes through which legal frameworks can be changed, and even though it’s a difficult process, it’s not conceptually difficult to understand. But the ability to share data between EHRs will, I think, be a much bigger hurdle to overcome. There are incentives for the government to review the regulations around patient data in order to push AI-in-healthcare initiatives; I can’t think of many incentives for companies to make it easier to port patient data between platforms. Unless the companies responsible for storing patient data make data portability and exchange a priority, I think it’s going to be very difficult to create large patient data sets.

Comment: DeepMind Can Now Beat Us at Multiplayer Games, Too

DeepMind’s agents are not really collaborating, said Mark Riedl, a professor at Georgia Tech College of Computing who specializes in artificial intelligence. They are merely responding to what is happening in the game, rather than trading messages with one another, as human players do…Although the result looks like collaboration, the agents achieve it because, individually, they so completely understand what is happening in the game.

Metz, C. (2019). DeepMind Can Now Beat Us at Multiplayer Games, Too. New York Times.

The problem with arguments like this is that 1) we end up playing semantic games about what words mean, 2) what we call the computer’s achievement isn’t relevant, and 3) just because the algorithmic solution doesn’t look the same as a human solution doesn’t make it less effective.

The concern around the first point is that, as algorithms become more adept at solving complex problems, we end up painting ourselves into smaller and smaller corners, hemmed in by how we defined the characteristics necessary to solve those problems. In this case, we can define collaboration in a way that means algorithms aren’t really collaborating, but tomorrow, when they can collaborate according to today’s definition, we’ll see people wanting to change the definition again.

The second point relates to competence. Algorithms are designed to be competent at solving complex problems, not to solve them in ways that align with our definitions of what words mean. In other words, DeepMind doesn’t care how the algorithm solves the problem, only that it does. Think about developing a treatment for cancer…will we care that the algorithm didn’t work closely with all stakeholders, as human teams would have to, or will it only matter that we have an effective treatment? In the context of solving complex problems, we care about competence.

And finally, why would it matter that algorithmic solutions don’t look the same as human solutions? In this case, human game-players have to communicate in order to work together because it’s impossible for them to do the computation necessary to “completely understand what is happening in the game”. If we had the ability to do that computation, we’d also drop the “communication” requirement, because it would only slow us down and add nothing to our ability to solve the problem.