Thanks to Ben Gordon for pointing me towards explainpaper.
In How to Read a Book (1972), Mortimer Adler says that “Reading…should be a conversation between you and the author.” This is why I don’t read without a figurative pen in my hand: as I’m reading, I want to mark up the text with questions and comments aimed at the author.
Well, I’ve just finished having a conversation with an article I wrote a few years ago. A literal conversation. Where the text has responded to the questions I posed to it. And I was completely blown away.
The system is available at explainpaper, and it works like this.
- Upload an article as a PDF.
- Highlight some text in the article.
- explainpaper ‘explains’ the text you’ve highlighted, using simple language that strips out the jargon.
- You can ask follow-up questions based on the response that explainpaper has provided.

Now, I haven’t spent much time with the site, so it’s entirely possible that my very positive experience won’t survive longer use. I also chose an article on a topic that explainpaper’s language model is probably biased towards. And that model still returns gibberish often enough that you shouldn’t simply accept what it tells you.
Nevertheless, this is an incredible tool that’s only going to get better. The section below includes the full conversation from my test, i.e. I haven’t edited the excerpts or cherry-picked only the examples that worked well. This is exactly the conversation I had.
I took an article I published with co-authors a couple of years ago and uploaded it to explainpaper. The article was Rowe, M., Nicholls, D. A., & Shaw, J. (2021). How to replace a physiotherapist: Artificial intelligence and the redistribution of expertise. Physiotherapy Theory and Practice, 0(0), 1–9.
From the article:
In this section, we describe six AI-based systems that have been introduced into clinical contexts that are clearly related to some of the tasks performed by physiotherapists. Our aim is to demonstrate how AI is beginning to encroach on some areas of physiotherapy practice that are not obviously amenable to automation. We have focused on six examples: video analysis, natural language processing, dexterous robotics, virtual assistants, expert systems, and prediction algorithms. Although not a comprehensive survey of applications of AI in physiotherapy, we believe these are salient use cases of AI that are worthy of special attention by the profession. The introduction of AI-based systems into health systems is likely to proceed in different stages, with some taking longer and others relatively soon. It is not our intention to suggest that these changes are imminent, universal, or cheap. Instead, we offer them as stimuli to provoke discussion on the future of the profession. In the following section, we discuss how various subdomains of AI map onto specific tasks that many physiotherapists would recognize as being central to their practice.
explainpaper’s (EP) initial summary: In this section, the author describes six examples of how AI is beginning to be used in clinical contexts related to tasks performed by physiotherapists. The examples are video analysis, natural language processing, dexterous robotics, virtual assistants, expert systems, and prediction algorithms. The author’s aim is to demonstrate how AI is beginning to encroach on some areas of physiotherapy practice that are not obviously amenable to automation.
Michael Rowe (MR): What are prediction algorithms?
EP: Prediction algorithms are computer programs that use data to make predictions about future events.
MR: Are there any areas of clinical practice that the author has neglected?
EP: The author has only focused on six examples of how AI is being used in clinical contexts related to tasks performed by physiotherapists.
MR: Can you give me more examples?
EP: Some other examples of how AI is being used in healthcare include diagnosis, treatment planning, and disease management.
MR: How is AI relevant for diagnosis?
EP: AI can be used to create models that can identify patterns in data that may be indicative of a particular disease. These models can then be used to make predictions about whether or not a patient has the disease.
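To make explainpaper’s answer about prediction algorithms concrete, here is a toy sketch of the idea it describes: learning per-symptom disease rates from labelled records and using them to score a new patient. This is purely illustrative (the data, symptom names, and scoring rule are all invented, and bear no relation to how explainpaper or any clinical system actually works).

```python
# Toy prediction algorithm: learn per-symptom disease rates from
# labelled records, then score a new patient. All data is invented.
from collections import defaultdict

def fit(records):
    """records: list of (symptoms, has_disease) pairs."""
    counts = defaultdict(lambda: [0, 0])  # symptom -> [disease count, total count]
    for symptoms, has_disease in records:
        for s in symptoms:
            counts[s][1] += 1
            if has_disease:
                counts[s][0] += 1
    return {s: d / t for s, (d, t) in counts.items()}

def risk(rates, symptoms):
    """Average the learned rates over a patient's symptoms."""
    known = [rates[s] for s in symptoms if s in rates]
    return sum(known) / len(known) if known else 0.0

records = [
    ({"cough", "fever"}, True),
    ({"cough"}, False),
    ({"fever", "fatigue"}, True),
    ({"fatigue"}, False),
]
rates = fit(records)
print(risk(rates, {"cough", "fever"}))  # 0.75 - higher for disease-linked symptoms
```

A real system would use far richer models and far more data, but the shape is the same: find patterns in past cases, then apply them to new ones.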
Another example of text highlighted in the same paper:
The development of computers that can use natural language processing (NLP) to imitate human interaction means that soon it may not be possible to know whether one is speaking to a person or a machine. In the clinical context, NLP is being used to generate structured, annotated text from unstructured clinical records that enable the automation of clinical workflows at volumes and speeds that human beings are incapable of achieving (Cai et al., 2016; Jackson et al., 2017). In addition, NLP may soon be used to transcribe and summarize conversations between clinicians and patients that will then only need to be approved by the care provider (Kreimeyer et al., 2017). The recording of clinician-patient conversations will also enable the conversion of unstructured human conversation into semantic data, allowing the automated capture of symptoms, medications, and other clinically relevant information directly into the EHR, lowering the risk of data capturing errors and reducing clinician burnout (Kroth et al., 2019). This will also facilitate new forms of clinical research by identifying patterns and relationships across large databases of patient records, which are too numerous and complex for human beings to recognize. As conversational user interfaces become ubiquitous in our devices and virtual personal assistants (VPAs) it will become increasingly difficult to know when our interactions are being mediated by software (Levy, 2016), opening up a wide range of ethical and legal concerns that are nowhere more pressing than in healthcare systems.
EP’s initial summary: The text explains how natural language processing (NLP) is being used to generate structured, annotated text from unstructured clinical records. In addition, NLP may soon be used to transcribe and summarize conversations between clinicians and patients. This will enable the automated capture of symptoms, medications, and other clinically relevant information directly into the EHR, lowering the risk of data capturing errors and reducing clinician burnout.
MR: What is the state of the art in NLP?
EP: The state of the art in NLP is that it is being used to generate structured, annotated text from unstructured clinical records. In addition, NLP may soon be used to transcribe and summarize conversations between clinicians and patients.
MR: Why would it be useful to convert unstructured conversation into structured data in the clinical context?
EP: There are several reasons why it would be useful to convert unstructured conversation into structured data in the clinical context. First, it would enable the automated capture of symptoms, medications, and other clinically relevant information directly into the EHR. This would lower the risk of data capturing errors and reduce clinician burnout. Second, it would facilitate new forms of clinical research by identifying patterns and relationships across large databases of patient records.
MR: What will be left for the clinician to do?
EP: The clinician will still be responsible for approving the transcriptions and summaries generated by the NLP software.
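The idea of converting unstructured conversation into structured data, which explainpaper explains well above, can be sketched in a few lines. This is a deliberately naive illustration (the term lists are invented, and real clinical NLP uses trained named-entity recognition, not keyword matching).

```python
# Minimal sketch: turn an unstructured clinician-patient transcript
# into structured EHR-style fields by matching against term lists.
# The lexicons here are invented for illustration only.
SYMPTOMS = {"headache", "nausea", "dizziness"}
MEDICATIONS = {"ibuprofen", "paracetamol"}

def extract(transcript):
    # Normalise to lowercase words with surrounding punctuation stripped.
    words = {w.strip(".,").lower() for w in transcript.split()}
    return {
        "symptoms": sorted(words & SYMPTOMS),
        "medications": sorted(words & MEDICATIONS),
    }

note = extract("Patient reports headache and nausea, taking ibuprofen daily.")
print(note)  # {'symptoms': ['headache', 'nausea'], 'medications': ['ibuprofen']}
```

The gap between this sketch and a production system is exactly where the clinician’s approval step, mentioned in EP’s answer, matters.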
A longer conversation that veers away from the actual text in the paper:
We are also seeing significant changes happening in the development of expert systems that are capable of storing, accessing, and analyzing everything we know about the pathologies related to human illness. These systems aim to provide clinical decision support in order to enhance decision-making (Shortliffe and Sepulveda, 2018) by removing common sources of human error in the clinical reasoning process. While some designers of expert systems seem to take seriously the concerns about patient privacy in an age of increasingly connected health systems (Attema et al., 2018; Castaneda et al., 2015) there are real ethical and legal implications that need consideration and which are dealt with in more detail later in the paper. As the ability to recall and reason moves into software systems we are likely to see a reduced emphasis on the clinician as a diagnostician. Knowing things about conditions will, therefore, become less important than knowing when to trust the outputs of clinical decision support systems.
EP’s initial summary: The text is discussing the development of expert systems that are designed to help with decision making in healthcare. The systems aim to remove sources of human error, but there are ethical concerns that need to be considered. The text suggests that as these systems become more prevalent, the importance of clinicians knowing about conditions will become less important than trusting the system’s outputs.
MR: What are expert systems?
EP: Expert systems are computer programs that aim to replicate the decision-making abilities of human experts. They typically do this by storing large amounts of data and using it to generate recommendations or predictions.
MR: Can we use information about the past to make predictions about the future?
EP: Yes, expert systems often use data about past events to make predictions about future events. This can be helpful in many different fields, including healthcare, where accurate predictions can lead to better decision-making.
MR: What if the patient has a condition that doesn’t appear in the historical data?
EP: If a patient has a condition that doesn’t appear in the historical data, the expert system may not be able to make an accurate prediction. In such cases, it’s important to have a human expert involved in the decision-making process.
MR: Can we trust expert systems to hand off to human experts when it is uncertain about the patient’s condition?
EP: It depends on the expert system. Some expert systems are designed to hand off to human experts when they are uncertain about a patient’s condition, while others are not. It’s important to know how an expert system is designed before trusting it to make decisions about healthcare.
MR: Should clinicians be involved in the design of expert systems that will be used in healthcare?
EP: Yes, clinicians should be involved in the design of expert systems that will be used in healthcare. This is because they have the expertise and knowledge needed to ensure that the system is designed in a way that will be safe and effective for patients.
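The hand-off behaviour discussed in the exchange above (an expert system deferring to a human when it is uncertain) can be sketched as a confidence threshold. The rules, conditions, and threshold below are invented for illustration; real clinical decision support systems are vastly more sophisticated.

```python
# Sketch of a rule-based expert system that defers to a clinician
# when its confidence falls below a threshold. Rules are invented.
RULES = {  # condition -> (required symptoms, confidence)
    "migraine": ({"headache", "nausea"}, 0.9),
    "flu": ({"fever", "cough"}, 0.8),
}

def diagnose(symptoms, threshold=0.75):
    best = None
    for condition, (required, confidence) in RULES.items():
        # A rule fires only when all of its required symptoms are present.
        if required <= symptoms and (best is None or confidence > best[1]):
            best = (condition, confidence)
    if best is None or best[1] < threshold:
        return ("refer to clinician", None)  # hand off when uncertain
    return best

print(diagnose({"headache", "nausea"}))  # ('migraine', 0.9)
print(diagnose({"headache"}))            # ('refer to clinician', None)
```

As EP notes, whether a given system actually behaves this way depends on how it was designed, which is precisely why clinicians need to understand the design before trusting it.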
A pretty impressive summary of the conclusion of this article:
Machine learning algorithms are already “smarter” than human beings within certain narrow domains of clinical practice and will increasingly take over some of the cognitive and physical tasks that were previously the sole domain of human clinicians. Instead of more sophisticated manual therapy skills, or better diagnostic abilities, we may find that successful clinical practice in the 21st century will rely more on the analysis and interpretation of ML algorithms, as well as the application of judgment in order to integrate the ethical, professional and social context of patients. Unless physiotherapists are actively engaged in a conversation around AI in clinical practice we run the risk that our clinical decision-making will be subject to machine intelligence, rather than informed by it. The challenge we face as a profession is to bring together machines and humans in ways that enhance human well-being, augment human ability, and expand human capacity. While it may be true that clinicians will be happy to see administrative work handed off to intelligent machines (e.g. data capture) what about the work that we consider to be core to the profession? What is left of “physiotherapy” when the discipline-specific tasks of the profession are automated? If, as we suggest, physiotherapists should move toward the more relational aspects of practice and reduce the current emphasis on specialized knowledge and expertise, it seems clear that the profession would no longer look like “physiotherapy”. We may subsequently find it difficult to distinguish ourselves from other healthcare professions who have moved in similar directions. Should these changes play out in the way that we describe here, would the profession remain fit for purpose? Would we still be physiotherapists?
EP: The text explains that machine learning algorithms are becoming smarter than human beings in certain areas, and that this trend will continue. This could have a big impact on the field of physiotherapy, since many of the tasks that physiotherapists currently do could be automated. This could lead to a situation where physiotherapists are no longer needed, or at least not needed in the same capacity.