> The act of summarising is not neutral. It involves decisions and choices that feed into the formation of knowledge and understanding. If we are to believe some of the promises of AI, then tools like Paper Digest (and the others that will follow) might make our research quicker and more efficient, but we might want to consider if it will create blindspots.
>
> — Beer, D. (2019). Should we use AI to make us quicker and more efficient researchers? LSE Impact blog.
I have some sympathy for the argument that, as publication in our respective fields increases in volume and speed, it will become impossible to stay on top of what’s current. I’m also fairly confident that AI-generated research summaries will get to the point where you’ll be able to sign up for a weekly digest that includes only the most important and relevant articles for your increasingly narrow area of interest. Obviously, “important” and “relevant” are terms that contain implicit assumptions about who you are and what you’re interested in.
Where I differ from the author of the post I’ve linked to is that I don’t see anyone mistaking the summary of the research for the research itself. No one is going to read the weekly digest and think that they’ve done the work of engaging with the details. You’ll get a 10-minute narrative overview of recent work published in your area, note the 3-5 articles that grab your attention, read those abstracts, and then maybe get to grips with 1 or 2 of them. Of course, there are concerns with this:
- Who is deciding what is included in the summary overview? Ideally, it should be you and not Elsevier, for example.
- How long will it be before you can really trust that the summary is accurate? Then again, you also have no way of verifying summaries written by people, other than by doing the work and reading the original.
- Whatever doesn’t show up in this feed risks being ignored entirely. But you can – and should – have multiple sources of information.
However, the benefit of AI is that it will take what is essentially a firehose of research findings and narrow it down to something you can make sense of and potentially do something with. At the moment I mainly rely on people I trust (those I follow on Twitter, for example) to share the research they think is important. In addition to the value of having a human-curated feed, there’s also a serendipity to finding articles this way. However, none of those people are sharing things specifically for me, so even then it’s hit-and-miss. I think an AI-based system will be better at separating the signal from the noise.
Note: I tried the service with 3 of my own open access articles, and all 3 times it returned the same result, which wasn’t a summary of anything I had submitted. So, definitely still in beta.