Tag Archives: publication

Why shouldn’t journals publish translations of articles alongside the English version?

Update (14 April 2022): If you’re interested in the notion that something is lost when we default to English as the language of scientific communication, you may be interested in this reflective podcast by Shaun Cleaver that was prepared as part of the 2020 In beta unconference.

A few days ago I received a submission to OpenPhysio from someone who was clearly a non-English first language speaker. After a few rounds of email to make sure I understood the general structure and claims of the article, I decided that we’d go ahead and work together to tidy it up a bit, before sending it out for peer review. I know that reviewers can sometimes take on an editorial role as part of the process and wanted to make sure that the central ideas were clear.

However, it occurred to me that this may also be an opportunity to offer the author the option of preparing a translation of the article in their home language, to be published alongside the ‘original’ i.e. the English version. Authors go to a lot of effort to translate their work into English, which has this weird side-effect of closing it off to a population of non-English speakers, who may nonetheless have benefitted from reading it. I can only see upsides to this practice and almost no disadvantages, other than it adding a bit more work to the publishing process. And of course, authors would have to agree to take on the translation themselves (I’m talking from the context of a fee-free journal, like OpenPhysio, that wouldn’t be able to pay for this service).

There are no technical limitations that would prevent this. Making a second version of the article available is as simple as providing a link to the file. To start with, we could even say that the translation will be available as a ‘stripped back’ version, with no formatting and design i.e. it could simply be a PDF with the original citation that points back to the canonical (English) version. Of course, the author can do this anyway but I think that making it available alongside the original would add some ‘credibility’ to the translation. This first iteration would just be a proof of concept. You can imagine that, over time, you could have it available in HTML (to help with discoverability), and also assign a DOI to the translated version to differentiate it from the canonical version. And you’d need to have a translator verify that the articles are the same.
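As a rough sketch of how little machinery the first iteration would need, here’s what the record a journal keeps for a translated version might look like, pointing back at the canonical English article. This is purely illustrative: the field names, relation label, URLs and DOIs are my own assumptions rather than any registry’s actual schema.

```python
# Minimal sketch: recording a translation alongside the canonical (English) article.
# Field names, relation labels and identifiers are illustrative assumptions,
# not a specific registry's schema.

from dataclasses import dataclass, asdict
import json


@dataclass
class ArticleVersion:
    doi: str                        # identifier for this version (hypothetical)
    language: str                   # language tag, e.g. "en" or "pt-BR"
    url: str                        # where the file (PDF or HTML) lives
    relation: str | None = None     # how this version relates to another record
    related_doi: str | None = None


canonical = ArticleVersion(
    doi="10.12345/example.123",             # hypothetical DOI
    language="en",
    url="https://journal.example/articles/123",
)

translation = ArticleVersion(
    doi="10.12345/example.123.pt",          # hypothetical DOI for the translation
    language="pt-BR",
    url="https://journal.example/articles/123/pt",
    relation="isTranslationOf",             # assumed relation label
    related_doi=canonical.doi,
)

# The article page could list both versions and expose a record like this so
# that indexes can tell the canonical version and the translation apart.
print(json.dumps([asdict(canonical), asdict(translation)], indent=2))
```

Even something this simple would be enough to make the translation discoverable and clearly tied to the original, which is really all the first iteration needs to do.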

I can’t think of any reasons for why we shouldn’t do this.

Resource: Internet Archive Scholar

https://scholar.archive.org/

This fulltext search index includes over 25 million research articles and other scholarly documents preserved in the Internet Archive. The collection spans from digitized copies of eighteenth century journals through the latest Open Access conference proceedings and pre-prints crawled from the World Wide Web.

I’m a big fan of the work being done by the Internet Archive, so I was especially interested to read about a new project they’ve initiated: Internet Archive Scholar (although I’m less excited about their logo, which looks like something designed in Microsoft Word in the 90s). The database is a collection of documents retrieved from:

  • public web content as preserved in The Wayback Machine and Archive-It partner collections
  • digitized print materials from paper and microform collections
  • general materials from archive.org collections, including collaborations with partners

Read more about IAS here.

OMW, Fermat’s Library looks amazing

Fermat’s Library is a service that allows members to upload papers and provide some of the context around research articles through annotation and discussion. The website creators talk about the importance of understanding the backstory to a lot of academic research.

For example, on the paper’s page you can see a summary of Richard Feynman’s The Value of Science, along with points worth highlighting in the text. You can respond to comments left by others as part of a longer discussion, if you’d like.


I haven’t spent much time browsing papers yet but it feels like the emergent emphasis is on older articles that are more philosophical in nature. I say ’emergent’ because there don’t appear to be any top-down conditions dictating what to upload and yet most articles I noticed were older, and ‘philosophical’ because the ones that stood out to me are the ones that ask questions rather than try to provide answers.

I learned about Fermat’s Library from the Lex Fridman podcast (which may be my new favourite podcast, by the way). The episode on Fermat’s Library talks about the platform itself, as well as the reason it exists, and problems with scientific publication in its current form, including the challenges of determining research impact.

This looks like a brilliant service and I’m excited to spend more time browsing papers on the site.

On the poor performance of AI models during the pandemic

Heaven, W.D. (2021). Hundreds of AI tools have been built to catch covid. None of them helped. MIT Technology Review.

In the end, many hundreds of predictive tools were developed. None of them made a real difference, and some were potentially harmful. That’s the damning conclusion of multiple studies published in the last few months. In June, the Turing Institute, the UK’s national center for data science and AI, put out a report summing up discussions at a series of workshops it held in late 2020. The clear consensus was that AI tools had made little, if any, impact in the fight against covid.

The article links to a few good overview studies and reports that provide more detail about the methodological flaws in the studies referred to in the title. There’s also this excellent thread by Cory Doctorow on the garbage in, garbage out problem that we find with many ML studies in general.

First of all, it’s obviously great news that we’re identifying the areas that ML falls short of expectations. We cannot be in situations where all claims are simply accepted because they align with our hopes and beliefs. This is why we publish our methods; we want others to find the mistakes in our work. This is what progress is.

But I’ll also add that we don’t need AI to make false claims based on poor evidence; there’s good evidence that most of the research we publish isn’t worth paying attention to anyway. See Ioannidis, J.P.A. (2005). Why Most Published Research Findings Are False. PLoS Medicine, 2(8), e124, for example. So there’s nothing special about using ML or any other technique to publish poor research; people have been doing that for a lot longer than we’ve been using machine learning.

And finally, it’s worth pointing out that all of the studies referenced in the MIT Technology Review article would have been conducted in the very earliest stages of the pandemic, with researchers trying to accelerate progress in an effort to limit the effects of a global virus outbreak. Journals relaxed publication criteria in efforts to share information as soon as possible, and the incentive structure around early publication of potentially groundbreaking research doesn’t exactly encourage slow and considered reflection. I’m not arguing that any of this is the way it should be, only that the problem is more complicated than simply highlighting the flaws in early papers. The response to Covid included a massive spike in related publications, and many of them would’ve used data that was gathered quickly, analysed poorly, and published in a hurry. No-one is doing the analyses to show how little those articles have contributed to solving the problem.

From the excerpt above: “None of them made a real difference, and some were potentially harmful.” This is true. But we can say the same thing about medical practice for almost all of human history. The methods of medicine before the 20th century caused an enormous amount of pain and suffering. And yet we still have doctors. They just had to up their game.

We need more research taking critical positions against the development of clinical interventions, whether or not those interventions include AI. That’s just good science. But we also need to make sure that we don’t demonise a technology that’s being implemented poorly by people, in the same way that we don’t demonise cars when they’re involved in pileups. I feel like this is a point that I’m going to keep having to make; there’s a ridiculous double-standard that exists when we evaluate the performance of AI while ignoring all the ways that people stuff things up.

These are early days for clinical and health AI and there’s a lot that doesn’t work. Scientific progress is nothing if not a collection of stories about how we’ve failed. But every now and again, something works. And it only works because we’ve spent ages figuring out all the ways it doesn’t.


Weekly digest (14-18 June 2021)

This digest has an AI and machine learning focus because I’m preparing a presentation for the SAAHE conference next week, and my topic is Clinicians’ perceptions of the introduction of AI into clinical practice. It’s from an international survey I completed in 2019, mostly forgot about in 2020 (because, Covid) and am finally trying to wrap up now. Anyway, my reading and thinking has been focused on this for the last week or so.


Heidelberg. (2021, May 4). Springer Nature advances its machine-generated tools and offers a new book format with AI-based literature overviews. Springer Nature Group.

It was very exciting to be part of such an innovative experiment. It enabled me to discover interesting aspects I had previously neglected, stimulating me to find out additional citations and references. The AI was able to find such connections producing a wealth of data which are summarized in the chapters of the book.

I can’t say anything about the quality of the book, only that it’s interesting to note that it’s possible to use an algorithm to create a literature review. And considering how difficult it is to do a good literature review (most are not very good), I’m fairly confident that algorithms will soon reach a point where they’re producing reviews of the literature that are at least as good as those produced by us.


Greene, T. (2020, April 14). Google’s AutoML Zero lets the machines create algorithms to avoid human bias. The Next Web | Neural.

Machines making their own algorithms, just like nature intended.

Perhaps the most interesting byproduct of Google’s quest to completely automate the act of generating algorithms and neural networks is the removal of human bias from our AI systems. Without us there to determine what the best starting point for development is, the machines are free to find things we’d never think of.

I’ve always thought it’s unfair to talk about machine learning bias as if it’s the fault of the algorithm. The algorithm is trained on data generated by human beings, and it’s our bias that’s reflected in the outcomes. Human beings make choices about what data to collect, how to collect it, how to label it, how to design the training process, what algorithms to train, what outcomes are valued, and so on. We also built the cultural, social, legal, ethical and commercial norms within which we generate the data in the first place. So it’s human beings who are biased and whose bias influences algorithmic outcomes. But no-one seems to be interested in trying to reduce the influence of human bias in our own decision-making, which is sub-optimal across the board. I’ve always thought that the best way to reduce bias in decision-making is to remove the human so it’s nice to see things like AutoML starting to do just that. At some point we should acknowledge that, in many scenarios, all we’re doing is adding noise.

See also Real, E., Liang, C., So, D. R., & Le, Q. V. (2020). AutoML-Zero: Evolving Machine Learning Algorithms From Scratch. ArXiv:2003.03384 [Cs, Stat]. http://arxiv.org/abs/2003.03384


Kahng, A. B. (2021). AI system outperforms humans in designing floorplans for microchips. Nature, 594(7862), 183–185.

Modern chips are a miracle of technology and economics, with billions of transistors laid out and interconnected on a piece of silicon the size of a fingernail. Each chip can contain tens of millions of logic gates, called standard cells, along with thousands of memory blocks, known as macro blocks, or macros. The cells and macro blocks are interconnected by tens of kilometres of wiring to achieve the designed functionality.

Mirhoseini et al. estimate that the number of possible configurations (the state space) of macro blocks in the floorplanning problems solved in their study is about 10^2,500. By comparison, the state space of the black and white stones used in the board game Go is just 10^360.

First of all, this kind of complexity is just insane. I knew that chip design was complicated but I didn’t really have a good idea of the scales involved.
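Just to make the gap concrete: the two estimates quoted above differ by a factor of around 10^2,140, which is itself an unimaginably large number. A trivial sketch of the arithmetic:

```python
# Back-of-the-envelope comparison of the two state-space estimates quoted above.
floorplan_exponent = 2500   # ~10^2,500 possible macro-block configurations
go_exponent = 360           # ~10^360 board states in Go

# The ratio of the two state spaces is 10 to the power of the difference.
ratio_exponent = floorplan_exponent - go_exponent
print(f"Floorplanning state space is ~10^{ratio_exponent} times larger than Go's")
# -> Floorplanning state space is ~10^2140 times larger than Go's
```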

One of the problems in chip design is the painstaking process of adding the macro blocks to the chip floorplan. You end up placing blocks that later have to be moved because of how you’re laying them out. Design choices made in the beginning influence what constraints you have to work with later, and changes to later placements have a knock-on effect that means having to move earlier blocks. But that’s not what happens with the algorithm, which seems as if it’s looking into the future and predicting what blocks will need to go where, which enables it to place blocks now that won’t need to be adjusted later. This kind of prediction and management of complexity is an example of something that we – humans – simply can’t conceive of doing without augmentation.

What’s possibly even more interesting is that the researchers approached the problem of block placement on the chip floorplan as if it were a board game like Go. If you think about it, placing blocks onto a bounded space in optimal configurations that lead to outcomes quantitatively superior to other placements is pretty much what games like chess and Go consist of. While I don’t think that this counts as transfer learning, it’s definitely an interesting example of analogy, where the algorithm is being used in one context that is analogous to another. This feels like something important.
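To make that framing a bit more concrete, here’s a deliberately tiny sketch of placement as a sequential ‘game’: a grid, a set of blocks placed one move at a time, and a score (a crude stand-in for wirelength). Note that this toy uses a greedy rule, which is exactly the myopic approach that creates the knock-on problems described above; in the paper, a learned policy effectively evaluates each move against an estimate of how the finished floorplan will score. Everything here (grid size, block names, scoring) is made up for illustration and is not the authors’ method.

```python
# Toy sketch of "macro-block placement as a board game": place blocks one at a
# time on a grid, with a score that rewards keeping connected blocks close
# together (a crude stand-in for wirelength). Not the paper's method.

import itertools

GRID = 4                                           # tiny 4x4 "floorplan"
blocks = ["A", "B", "C", "D"]                      # macro blocks to place, in order
connected = {("A", "B"), ("B", "C"), ("C", "D")}   # which blocks are wired together


def wirelength(placement):
    """Sum of Manhattan distances between connected blocks (lower is better)."""
    total = 0
    for a, b in connected:
        (ax, ay), (bx, by) = placement[a], placement[b]
        total += abs(ax - bx) + abs(ay - by)
    return total


def greedy_place():
    """Place each block in the free cell that currently minimises wirelength.

    This is the myopic strategy: each move looks good now, but early choices
    constrain later ones, which is the knock-on problem described above.
    """
    placement = {}
    free = set(itertools.product(range(GRID), range(GRID)))
    for block in blocks:
        def score(cell):
            trial = {**placement, block: cell}
            # Only count connections whose endpoints are already placed.
            return sum(
                abs(trial[a][0] - trial[b][0]) + abs(trial[a][1] - trial[b][1])
                for a, b in connected
                if a in trial and b in trial
            )
        cell = min(free, key=score)
        placement[block] = cell
        free.remove(cell)
    return placement


final = greedy_place()
print(final, "wirelength:", wirelength(final))
```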


Koenig, R. (2021). Why Education Is a ‘Wicked Problem’ for Learning Engineers to Solve. EdSurge News.

We have not yet started thinking about how humans will react to those machines. And what do we need to teach humans about those machines so that the human-machine collaboration is an effective one?

This simply isn’t true. We’ve been thinking about the problem of interacting with machines for a very long time. It’s called science fiction and we have many different lines of inquiry as to how this might play out. From movies, to books, to blog posts, to tweets, we have thousands of people who spend a lot of time thinking carefully about how we might react to intelligent machines. This comment just reflects a lack of imagination.

Call for papers – Towards a new normal in physiotherapy education

By responding to this global disruption, we are placed in a situation where we are having to rethink our approaches to physiotherapy education. All over the world physiotherapy educators are engaged in what is possibly the most extensive programme of pedagogical change in our professional history. We see colleagues responding with creativity, empathy and flexibility, creating a unique opportunity for us to capture and share what may be a series of transformative changes in physiotherapy education at a global scale.

I’m excited to announce that OpenPhysio has put out a call for papers aimed at learning how colleagues from around the world are responding to the changes they’re currently experiencing within their professional programmes. We’re interested in the changes currently underway that have the potential to transform physiotherapy education, both in the short- and long-terms.

Submissions should be short (1500-2000 words) research reports or notes with a clear problem, a maximum of 3-5 citations, early findings (even if only in the form of observations), and provide a single focused recommendation.

You can find out more about the call on the OpenPhysio website.

#APAperADay: Twelve tips for getting your manuscript published.

Cook, D. A. (2016). Twelve tips for getting your manuscript published. Medical Teacher, 38(1), 41–50.


I went through this article to present it for discussion at our departmental journal club meeting last week. It’s a useful review paper for anyone interested in academic publishing, especially novice authors who may not have much experience preparing manuscripts for submission.


Getting the manuscript ready

1. Plan early to get it out the door. Write regularly – even if it’s for shorter periods – because it’s hard to find large blocks of time, and waiting for them means you don’t write very often. Set clear, concrete goals because otherwise you end up doing lots of reading and editing but don’t put words on the page. Refine in stages, perhaps initially using a rough outline where the argument can be presented and seen all at once, before expanding points into sentences, then paragraphs, and finally into sections.

2. Address authorship and writing group expectations up front. Deciding the order and contributions of each author is important to do early on in the process. See the ICMJE guidelines on defining the role of authors. The main point to take away is that, in order to be listed as an author, an intellectual contribution to the paper (which is different to the project) is necessary.

3. Maintain control of the writing. There needs to be one person who drives the process and ensures that editing of the manuscript is controlled. The author suggests having one master document that only they have access to, with other authors submitting changes on separate documents. This might be less important with the version control and change tracking that’s built into current collaborative writing platforms e.g. Google Docs.

4. Ensure complete reporting. Find out what reporting guidelines exist for your specific type of study design e.g. SR, RCT, qualitative research, etc. Note that the title can be thought of as part of what you report to the reader; it’s the one thing that every reader will actually read. The introduction provides context, a conceptual framework, literature review, problem statement and then the question or aim. The Discussion should be focused and informative, leaving out what is not really necessary. It might follow this structure: summary, limitations, integration with prior work, implications for practice or research.

5. Use electronic reference management software. You can do this manually but, after the initial setup of your resource library, using management software is far more efficient. There are two additional reasons to use software: citations can be reformatted into different styles, and new citations can be inserted without having to renumber everything else. Don’t capture sources into your library by hand as this can introduce errors; use the software to import from PubMed and journals directly. Mendeley is popular, as is Endnote. I use Zotero, which is an excellent open source programme.

6. Polish carefully before you submit. Make sure that there are as few spelling, grammatical, typographical, punctuation and style errors as possible in the manuscript before you submit. It’s also important to be consistent; inconsistencies (e.g. mixing UK and US English, different styles for first-level headings, varying citation formatting) will all suggest to the Editor that you’re not paying attention to the small things.

7. Select the right journal. Who will be reading the journal? There’s no point aiming for a high impact journal if their audience won’t be interested in your work. Review the journal aim and scope, instructions for authors, or even contact the Editor and ask if they think that your topic and question would be of interest to the journal’s audience. Try to evaluate your own work objectively, possibly by comparing it to a few papers from the journal you’re aiming for, and ask if it would fit alongside those articles. All metrics used to evaluate the “quality” of a journal are flawed.

8. Follow journal instructions precisely. Editors may desk reject (i.e. not even send out for review) articles where authors have disregarded the instructions. There are often a variety of other items that need to accompany the article e.g. cover letter (topic, aim, implications), disclosures, conflict of interest statements, authorship, possible reviewers, funding, and ethics clearance. It can take surprisingly long to gather this additional information.

When you are rejected (and you will be rejected)

9. Get it back out the door quickly. There’s no value in delaying it because your feelings are hurt. Try to remember that everyone gets rejected. It may be helpful to have a list of other journals you will submit to if the article is rejected. It is not helpful to argue with the Editor.

10. Take seriously all reviewer and editor suggestions. Even though you are obviously not required to use the feedback, you should at least pay it some attention. The author suggests a rubric for deciding what comments to pay attention to: essential, high-yield, easy and useful, other.

When you are invited to revise and resubmit

It’s unlikely that you will ever have an article accepted without having to make any changes.

11. Respond carefully to every suggestion, even if you disagree. I agree with the first part of this, “respond carefully”. However, the second part seems to suggest that you should make the suggested changes, even if you disagree. The author even says that the “reviewers are always right”. I disagree and will almost always stand my ground on points that I feel don’t need to be changed. I’ll sometimes spend 2-3 times longer arguing for why the change shouldn’t be made, than it would’ve taken to just edit the text. However, I will clarify the writing to ensure that other readers don’t make the same mistake that the reviewer made. You do need to respond to every comment though, ensuring that you’re respectful in your responses. Whatever you think of the actual feedback, someone has taken the time to read and comment on your work. Make sure that you follow the journal instructions for how to edit and resubmit your article.

12. Get input from others as you revise. It’s especially useful to have someone else go over your response to the reviewers. It may also be useful to contact the Editor directly; they have asked you to resubmit so obviously think that your work has merit.

9 (revisited). Get it back out the door quickly. When asked to resubmit, unless the reviewers are suggesting major changes, it might be worthwhile dropping everything else and focusing on making the changes.

There is a little more than a page devoted entirely to a series of tips for effective tables and figures (pg. 5-6).

Table 3 (pg. 8-9) includes examples of different kinds of reviewer comments, with appropriate responses.


Note: I’m the Editor at OpenPhysio, an open-access, peer-reviewed online journal with a focus on physiotherapy education. If you’re doing interesting work in the classroom, even if you have no experience in publishing educational research, we’d like to help you share your stories.

Article: Predatory journals: No definition, no defense.

Everyone agrees that predatory publishers sow confusion, promote shoddy scholarship and waste resources. What is needed is consensus on a definition of predatory journals. This would provide a reference point for research into their prevalence and influence, and would help in crafting coherent interventions.

Grudniewicz, A., et al. (2019). Predatory journals: No definition, no defence. Nature, 576, 210–212. doi: 10.1038/d41586-019-03759-y

A variety of checklists exist for deciding whether a journal is “predatory”, but the lists are inconsistent with one another and overlap in places, which is not helpful for authors.

The consensus definition reached by the authors of the paper:

Predatory journals and publishers are entities that prioritize self-interest at the expense of scholarship and are characterized by false or misleading information, deviation from best editorial and publication practices, a lack of transparency, and/or the use of aggressive and indiscriminate solicitation practices.

Further details of the main concepts in the definition are included in the article.


Note: Some parts of this article were cross-posted at OpenPhysio, an open-access, peer-reviewed online journal with a focus on physiotherapy education. If you’re doing interesting work in the classroom, even if you have no experience in publishing educational research, we’d like to help you share your stories.

Resource: The Scholarly Kitchen podcast.

The Society for Scholarly Publishing (SSP) is a “nonprofit organization formed to promote and advance communication among all sectors of the scholarly publication community through networking, information dissemination, and facilitation of new developments in the field.” I’m mainly familiar with SSP because I follow their Scholarly Kitchen blog series and only recently came across the podcast series through the two episodes on Early career development (part 1, part 2). You can listen on the web at the links or subscribe in any podcast client by searching for “Scholarly Kitchen”.


Note: I’m the editor and founder of OpenPhysio, an open-access, peer-reviewed online journal with a focus on physiotherapy education. If you’re doing interesting work in the classroom, even if you have no experience in publishing educational research, we’d like to help you share your stories.

Article: Which are the tools available for scholars?

In this study, we explored the availability and characteristics of the assisting tools for the peer-reviewing process. The aim was to provide a more comprehensive understanding of the tools available at this time, and to hint at new trends for further developments…. Considering these categories and their defining traits, a curated list of 220 software tools was completed using a crowdsourced database to identify relevant programs and ongoing trends and perspectives of tools developed and used by scholars.

Martínez-López, J.I., Barrón-González, S. & Martínez López, A. (2019). Which Are the Tools Available for Scholars? A Review of Assisting Software for Authors during Peer Reviewing Process. Publications, 7(3): 59.

The development of a manuscript is inherently a multi-disciplinary activity that requires a thorough examination and preparation of a specialized document.

This article provides a nice overview of the software tools and services that are available for authors, from the early stages of the writing process, all the way through to dissemination of your research more broadly. Along the way the authors also highlight some of the challenges and concerns with the publication process, including issues around peer review and bias.

The services are classified into the following nine categories:

  1. Identification and social media: Researcher identity and community building within areas of practice.
  2. Academic search engines: Literature searching, open access, organisation of sources.
  3. Journal-abstract matchmakers: Choosing a journal based on links between their scope and the article you’re writing.
  4. Collaborative text editors: Writing with others and enhancing the writing experience by exploring different ways to think about writing.
  5. Data visualization and analysis tools: Matching data visualisation to purpose, and alternatives to the “2 tables, 1 figure” limitations of print publication.
  6. Reference management: Features beyond simply keeping track of PDFs and folders; export, conversion between citation styles, cross-platform options, collaborating on citation.
  7. Proofreading and plagiarism detection: Increasingly sophisticated writing assistants that identify issues with writing and suggest alternatives.
  8. Data archiving: Persistent digital datasets, metadata, discoverability, DOIs, archival services.
  9. Scientometrics and Altmetrics: Alternatives to citation and impact factor as means of evaluating influence and reach.

There’s an enormous amount of information packed into this article and I found myself with loads of tabs open as I explored different platforms and services. I spend a lot of time thinking about writing, workflow and compatibility, and this paper gave me even more to think about. If you’re fine with Word and don’t really get why anyone would need anything else, you probably don’t need to read this paper. But if you’re like me and get irritated because Word doesn’t have a “distraction free mode”, you may find yourself spending a couple of hours exploring options you didn’t know existed.


Note: I’m the editor and founder of OpenPhysio, an open-access, peer-reviewed online journal with a focus on physiotherapy education. If you’re doing interesting work in the classroom, even if you have no experience in publishing educational research, we’d like to help you share your stories.