Categories: reading research

#APaperADay – Conceptual frameworks to illuminate and magnify

Bordage, G. (2009). Conceptual frameworks to illuminate and magnify. Medical Education, 43(4), 312–319. https://doi.org/10.1111/j.1365-2923.2009.03295.x

Conceptual frameworks represent ways of thinking about a problem or a study, or ways of representing how complex things work the way they do.


A nice position paper that emphasises the value of conceptual frameworks as tools for thinking, not only more deeply about problems but also more broadly, through the use of multiple frameworks applied to different aspects of a problem. The author uses three examples to develop a set of 13 key points related to the use of conceptual frameworks in education and research. The article is useful for anyone interested in developing a deeper approach to project design and educational research.

Frameworks inform the way we think and the decisions we make. The same task – viewed through different frameworks – will likely have different ways of thinking associated with it.

Frameworks come from:

  • Theories that have been confirmed experimentally;
  • Models derived from theories or observations;
  • Evidence-based practices.

We can combine frameworks to make our activities more holistic. Educational problems can be framed with multiple frameworks, each providing a different point of view and leading to different conclusions/solutions.

Like a lighthouse that illuminates only certain sections of the complete field of view, conceptual frameworks also provide only partial views of reality. In other words, there is no “correct” or all-encompassing framework for any given problem. Using a framework only enables us to illuminate and magnify one aspect of a problem, necessarily leaving others in the dark. When we start working on a problem without identifying our frameworks and assumptions (which can also be thought of as identifying our biases), we limit the range of possible solutions.

Authors of medical education studies tend not to explicitly identify their biases and frameworks.

The author goes on to provide three examples of how conceptual frameworks can be used to frame various educational problems (2 in medical education projects, 1 in research). Each example is followed by key points (13 in total). In each of the examples, the author describes possible pathways through the problem in order to develop different solutions, each informed by different frameworks.

Key points (these points make more sense after working through the examples):

  1. Frameworks can help us to differentiate problems from symptoms by looking at the problem from broader, more comprehensive perspectives. They help us to understand the problem more deeply.
  2. Having an awareness of a variety of conceptual frameworks makes it more likely that our possible solutions will be wide-ranging, because the frameworks emphasise different aspects of the problem and potential solution.
  3. Because each framework is inherently limited, a variety of frameworks can provide more ways to identify the important variables and their interactions/relationships. It is likely that more than one framework is relevant to the situation.
  4. We can use different frameworks within the same problem to analyse different aspects of the problem e.g. one for the problem and one for the solution.
  5. Conceptual frameworks can come from theories, models or evidence-based practices.
  6. Scholars need to apply the principles outlined in the conceptual framework(s) selected.
  7. Conceptual frameworks help identify important variables and their potential relationships; this also means that some variables are disregarded.
  8. Conceptual frameworks are dynamic entities and benefit from being challenged and altered as needed.
  9. Conceptual frameworks allow scholars to build upon one another’s work and allow individuals to develop programmes of research. When researchers don’t use frameworks, there’s an increased chance that the “findings may be superficial and non-cumulative.”
  10. Programmatic, conceptually-based research helps accumulate deeper understanding over time and thus moves the field forward.
  11. Relevant conceptual frameworks can be found outside one’s specialty or field. Medical education scholars shouldn’t expect that all relevant frameworks can be found in the medical education literature.
  12. Considering competing conceptual frameworks can maximise your chances of selecting the most appropriate framework for your problem or situation while guarding against premature, inappropriate or sub-optimal choices.
  13. Scholars are responsible for making explicit in their publications the assumptions and principles contained in the conceptual framework(s) they use.

The third example seems (to me) to be an unnecessarily long diversion into the author’s own research. And while the first two examples are quite practical and relevant, the third is quite abstract, possibly because of the focus on educational research and study design. I wonder how many readers will find relevance in it.

In a research context, conceptual frameworks can help to frame or formulate the initial questions, identify variables for analysis, and interpret results.

The conclusion of the paper is a very nice summary of the main ideas. However, it also introduces some new ideas, which probably should have been included in the main text.

Conceptual frameworks provide different lenses for looking at, and thinking about, problems and conceptualising solutions. Using a variety of frameworks, we open ourselves up to different solutions and potentially avoid falling victim to our own assumptions and biases.

It’s important to remember that frameworks magnify and illuminate only certain aspects of each problem, leaving other aspects in the dark i.e. there is no single framework that does everything.

Novice educators and researchers may find it daunting to work with frameworks, especially considering that they may not be aware of the range of possible frameworks.

How do you choose one framework over another? It’s important to discuss your problem and potential solutions with more experienced colleagues and experts in the field. Remember however, that some experts may be experts partly because they’ve spent a long time committed to a framework/way of seeing the world, which may make it difficult for them to give you an unbiased perspective.

Reviewing the relevant literature also helps to identify what frameworks other educators have used in addressing similar problems. The specific question you’re asking is also an important means of identifying a relevant framework.

Categories: AI clinical reading research

Resource: Towards a curated library for AI in healthcare

I’ve started working on what will eventually become a curated library of resources that I’m using for my research on the impact of artificial intelligence and machine learning on clinical practice. At the moment it’s just a public repository of the articles, podcasts, and blog posts that I’ve read or listened to and then saved in Zotero. You can subscribe to the feed so that you’ll get a notification in whatever feed reader you use when new items are added. Click on the image below to see the library.

The main library view in the web version of Zotero (note that the public view is different to what I’m showing here, since I have the beta version enabled; all of the functionality is the same though).
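If you’d rather pull the library into your own tools than rely on the feed, Zotero’s web API exposes public group libraries as JSON. Here’s a minimal sketch in Python; the group ID is a hypothetical placeholder (a public group’s numeric ID appears in its zotero.org URL):

```python
import requests

# Hypothetical group ID: a public group's numeric ID appears in its zotero.org URL.
GROUP_ID = "1234567"

# Zotero web API v3: list the most recently added items in a public group library.
response = requests.get(
    f"https://api.zotero.org/groups/{GROUP_ID}/items",
    params={"format": "json", "limit": 25, "sort": "dateAdded", "direction": "desc"},
    headers={"Zotero-API-Version": "3"},
)
response.raise_for_status()

for item in response.json():
    data = item["data"]
    print(f"{data.get('title', '[no title]')} ({data.get('itemType')})")
```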

For now, it’s a public – but closed – group with a library, meaning that anyone can see the list of library items but no-one can join the group, and so no-one else can add, edit or delete resources. This is just because I’m still figuring out how it works and don’t want the additional admin of actually managing anything. I may open this up in future if it looks like anyone else is interested in joining and contributing. I’m also not sharing any of the original articles and books, but will look into the implications of sharing these publicly, considering that most of them – being academic articles – are subject to copyright restrictions from the publishers.

The library/repository isn’t meant to be exhaustive but rather a small selection of articles and other resources that I think might be useful for clinicians, educators, students and researchers with an interest in AI in healthcare. At the moment it’s just a dump of some of the resources I’ve used, and includes notes and links associated with them. I’m going to revisit the items in the list and try to add more useful summaries and descriptions, with the idea that this could become something like a curated, annotated reading/watching/listening list for anyone with an interest in the topic.

Categories: AI research

#APaperADay – The Last Mile: Where Artificial Intelligence Meets Reality

“…implementation should be seen as an agile, iterative, and lightweight process of obtaining training data, developing algorithms, and crafting these into tools and workflows.”

Coiera, E. (2019). The Last Mile: Where Artificial Intelligence Meets Reality. Journal of Medical Internet Research, 21(11), e16323. https://doi.org/10.2196/16323

A short article (2 pages of text) describing the challenges of building AI systems without understanding that technological solutions are only relevant when they solve real world problems that we care about, and when they are built within the systems that they will ultimately be used in.

Note: I found it hard not to just rewrite the whole paper because I really like the way Coiera writes and find that his economy with words makes it hard to cut things out i.e. I think that it’s all important text. I tried to address this by making my notes without looking at the original article, and then going back over the notes and rewriting them.


Technology shapes us as we shape it. Humans and machines form a sociotechnical system.

The application of technology should be shaped by the problem at hand and not the technology itself. But we see the opposite of this today, with companies building technologies that are then used to solve “problems” that no-one thought were problems. Most social media fits this description.

Technological innovations may create new classes of solution, but it’s only in the real world that we see which problems are worth addressing and which solutions are most appropriate. When a technology is presented as a solution, it’s up to us to decide whether it is the best solution, and whether the problem is even important.

There are two broad research agendas for AI:

  1. The technical aspects of building machine intelligence.
  2. The application of machine intelligence to real world problems that we care about.

In our drive to accelerate progress in the first area, we may lose sight of the second. For example, even though image recognition is developing very quickly, the use of image recognition systems has had little clinical impact to date. In some cases, it may even make clinical outcomes worse, for example when the overdiagnosis of a condition increases management (with its associated costs and exposure to harm) even though treatment options remain unchanged.

There are three stages of development with data-driven technologies like AI-based systems:

  1. Data are acquired, labelled and cleaned.
  2. Algorithms are built and their technical performance tested in controlled environments.
  3. Algorithms are applied in real-world contexts.

It’s only really in the last stage that it becomes clear that “AI does nothing on its own” i.e. all technology is embedded in the sociotechnical systems mentioned earlier and is intricately connected to people and the choices that people make. This makes sociotechnical systems messy and complex, and therefore immune to the “solutions” touted by technology companies.

Some of the “last mile” challenges of AI implementation include:

  1. Measurement: We use standard metrics of AI performance to show improvement. But these metrics are often only useful in controlled experiments and are divorced from the practical realities of implementation in the clinical context.
  2. Generalisation and calibration: AI systems are trained on historical data, so the future performance of the algorithm depends on how well the historical data matches the new context (see the sketch after this list).
  3. Local context: The complexity of interacting variables within local contexts means that any system will have to be fine-tuned to the organisation in which it is embedded. Organisations also change over time, meaning that the AI will need to be adjusted as well.
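To make the generalisation and calibration point concrete, here’s a minimal, self-contained sketch using synthetic data (my own illustration, not from the paper). A model is trained on “historical” data from a development site and then evaluated at a site where the baseline prevalence of the outcome differs; discrimination (AUC) survives the shift, but calibration (Brier score) degrades:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, brier_score_loss

rng = np.random.default_rng(0)

def make_cohort(n, intercept_shift=0.0):
    """Synthetic cohort: same risk factor, but baseline prevalence can differ."""
    x = rng.normal(1.0, 1.0, size=(n, 1))
    true_prob = 1 / (1 + np.exp(-(x[:, 0] - 1.0 + intercept_shift)))
    y = rng.binomial(1, true_prob)
    return x, y

# "Historical" training data from the development site, and data from a
# deployment site where the outcome is more prevalent.
x_dev, y_dev = make_cohort(5000)
x_new, y_new = make_cohort(5000, intercept_shift=1.5)

model = LogisticRegression().fit(x_dev, y_dev)

for site, x, y in [("development site", x_dev, y_dev), ("new site", x_new, y_new)]:
    probs = model.predict_proba(x)[:, 1]
    # Ranking (AUC) holds up, but the predicted probabilities are now
    # systematically too low, so the Brier score worsens.
    print(f"{site}: AUC={roc_auc_score(y, probs):.2f}, "
          f"Brier={brier_score_loss(y, probs):.3f}")
```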

The author also provides possible solutions to these challenges.

Software development has moved from a linear process to an iterative model where systems are developed in situ through interaction with users in the real world. Google, Facebook, Amazon, etc. do this all the time by exposing small subsets of users to changes in the platform, and then measuring differences in engagement using metrics that the platforms care about (time spent on Facebook, or number of clicks on ads).
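The core of that pattern is simple enough to sketch (a toy illustration, not from the paper): deterministically assign a small slice of users to a variant, then compare an engagement metric between the arms.

```python
import hashlib
import random
from statistics import mean

def assign_variant(user_id: str, treatment_share: float = 0.05) -> str:
    """Deterministically bucket a user by hashing their ID, so the same
    user always gets the same experience across sessions."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 10_000
    return "treatment" if bucket < treatment_share * 10_000 else "control"

# Simulated engagement metric (e.g. clicks per session), for illustration only.
random.seed(0)
metrics = {"control": [], "treatment": []}
for i in range(100_000):
    arm = assign_variant(f"user-{i}")
    engagement = random.gauss(2.0, 1.0) + (0.1 if arm == "treatment" else 0.0)
    metrics[arm].append(engagement)

for arm, values in metrics.items():
    print(f"{arm}: n={len(values)}, mean engagement={mean(values):.3f}")
```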

In healthcare we’ll need to build systems in which AI-based technologies are implemented, not as completed solutions, but with the understanding that they will need refinement and adaptation through iterative use in complex, local contexts. Ideally, they will be built within the systems they are going to be used in.

Categories: reading research

#APaperADay – It’s Time for Medical Schools to Introduce Climate Change Into Their Curricula

This is my first attempt to share a short summary of a paper that I’ve read as part of my #APaperADay project, where I try to put aside the last 30-60 minutes of every day for reading and summarising an article. Obviously, I’m not going to be able to finish an article a day so these won’t be daily posts.

Also, paper selection is likely to be arbitrary. This isn’t an attempt to find “the best” or “most interesting” articles. It’s probably just me going through my reading list and choosing something based on how much time I have left in the day.

I’m going to try and make these summaries short and may also start adding my own commentary within the main text as part of an attempt to engage more deeply with the subject. Please don’t assume that my summaries are 1) accurate representations of the actual content, 2) substitutes for reading the original, 3) appropriate sources of knowledge in their own right.


Citation: Wellbery, C., Sheffield, P., Timmireddy, K., Sarfaty, M., Teherani, A., & Fallar, R. (2018). It’s Time for Medical Schools to Introduce Climate Change Into Their Curricula. Academic Medicine, 93(12), 1774–1777. https://doi.org/10.1097/ACM.0000000000002368

This is a position piece that begins by describing the impact of human beings on the planet (the Anthropocene).

The effects of climate change will disproportionately affect the most vulnerable populations (the very old and very young, those who are sick, and those who are poor).

Current efforts in HPE policy have been directed towards preparing health professionals to help address the effects of climate change. However, medical schools have not made much headway in updating their curricula to explicitly include this new content.

Rationale for including climate change in medical education

  1. Today’s generation of health professions students are those who have a large stake in developing a strategic response.
  2. The health effects of climate change are getting worse, and health professionals will need to be adequately prepared to meet this challenge.
  3. It is everyone’s responsibility to drive efforts at reducing the environmental footprint of healthcare, which is a non-trivial contributor to global warming.
  4. Climate change will disproportionately affect the most vulnerable populations, whom health professionals are obliged to help.
  5. The inclusion of climate change will facilitate the development of thinking skills that are (hopefully) transferable to other aspects of the curriculum.

Current curricular interventions

There needs to be a rethinking of the division between public and individual health. Climate change will increasingly affect the environment, which will increasingly affect people. These complex interactions among complex variables will affect political, social, scientific, and economic domains, all of which are currently beyond the scope of medical education.

Climate change as a topic of discussion can be relatively easily integrated into medical curricula, alongside conditions that are already covered. For example, a discussion on asthma could include the negative effect of global warming on this particular condition. In other words, climate change need not be included as a separate module/subject/topic but could be integrated with the current curriculum.

“Climate-relevant examples and the overarching macrocosmic mechanisms linking them to individual disease processes could broaden discussions of such topics as cardiovascular health (related to changing air quality), sexually transmitted infections (related to displaced populations), and mental health disorders (related both to displaced populations and also to extreme weather).”

The article finishes with a few examples of how some medical schools have incorporated climate change into their curricula. It seems likely that this is something that will need to happen over time i.e. programmes can’t simply dump a load of “global warming/climate change” content into the curriculum overnight.

Comment: This is a short paper that might be interesting for someone who’d like to know why climate change should be a topic of interest in health professions education. If this is something you’re already even passingly familiar with, you’re probably not going to get much from it. But it may be useful to pass on to someone who thinks that climate change isn’t relevant in a health professions curriculum.


Categories: Publication research

Resource: The Scholarly Kitchen podcast.

The Society for Scholarly Publishing (SSP) is a “nonprofit organization formed to promote and advance communication among all sectors of the scholarly publication community through networking, information dissemination, and facilitation of new developments in the field.” I’m mainly familiar with SSP because I follow their Scholarly Kitchen blog series and only recently came across the podcast series through the 2 episodes on Early career development (part 1, part 2). You can listen on the web at the links or subscribe in any podcast client by searching for “Scholarly Kitchen”.


Note: I’m the editor and founder of OpenPhysio, an open-access, peer-reviewed online journal with a focus on physiotherapy education. If you’re doing interesting work in the classroom, even if you have no experience in publishing educational research, we’d like to help you share your stories.

Categories: AI research

Comment: Should we use AI to make us quicker and more efficient researchers?

The act of summarising is not neutral. It involves decisions and choices that feed into the formation of knowledge and understanding. If we are to believe some of the promises of AI, then tools like Paper Digest (and the others that will follow) might make our research quicker and more efficient, but we might want to consider if it will create blindspots.

Beer, D. (2019). Should we use AI to make us quicker and more efficient researchers? LSE Impact blog.

I have some sympathy for the argument that, as publication in our respective fields increases in volume and speed, it will become impossible to stay on top of what’s current. I’m also fairly confident that AI-generated research summaries will get to the point where you’ll be able to sign up for a weekly digest that includes only the most important and relevant articles for your increasingly narrow area of interest. Obviously, “important” and “relevant” are terms that contain implicit assumptions about who you are and what you’re interested in.

Where I differ from the author of the post I’ve linked to is that I don’t see anyone mistaking the summary of the research for the research itself. No-one is going to read the weekly digest and think that they’ve done the work of engaging with the details. You’ll get a 10 minute narrative overview of recent work published in your area, note the 3-5 articles that grab your attention, read those abstracts and then maybe get to grips with 1 or 2 of them. Of course, there are concerns with this:

  • Who is deciding what is included in the summary overview? Ideally, it should be you and not Elsevier, for example.
  • How long will it be before you really can trust that the summary is accurate? But, you also have no way of trusting summaries written by people, other than by doing the work and reading the original.
  • Whatever doesn’t show up in this feed may be ignored. But you can – and should – have multiple sources of information.

However, the benefit of AI is that it will take what is essentially a firehose of research findings and limit it to something you can make sense of and potentially do something with. At the moment I mainly rely on people I trust (i.e. those who I follow on Twitter, for example) to share the research they think is important. In addition to the value of having a human-curated feed there’s also a serendipity to finding articles this way. However, none of those people are sharing things specifically for me, so even then it’s hit and miss. I think an AI-based system will be better for separating the signal from the noise.

Note: I tried to use the service for 3 of my own open access articles and all 3 times it returned the same result, which wasn’t a summary of any of what I had submitted. So, definitely still in beta.

Categories: Publication research scholarship

Article: Which are the tools available for scholars?

In this study, we explored the availability and characteristics of the assisting tools for the peer-reviewing process. The aim was to provide a more comprehensive understanding of the tools available at this time, and to hint at new trends for further developments…. Considering these categories and their defining traits, a curated list of 220 software tools was completed using a crowdsourced database to identify relevant programs and ongoing trends and perspectives of tools developed and used by scholars.

Israel Martínez-López, J., Barrón-González, S. & Martínez López, A. (2019). Which Are the Tools Available for Scholars? A Review of Assisting Software for Authors during Peer Reviewing Process. Publications, 7(3): 59.

The development of a manuscript is inherently a multi-disciplinary activity that requires a thorough examination and preparation of a specialized document.

This article provides a nice overview of the software tools and services that are available for authors, from the early stages of the writing process, all the way through to dissemination of your research more broadly. Along the way the authors also highlight some of the challenges and concerns with the publication process, including issues around peer review and bias.

The classification of services is divided into the following nine categories:

  1. Identification and social media: Researcher identity and community building within areas of practice.
  2. Academic search engines: Literature searching, open access, organisation of sources.
  3. Journal-abstract matchmakers: Choosing a journal based on links between their scope and the article you’re writing.
  4. Collaborative text editors: Writing with others and enhancing the writing experience by exploring different ways to think about writing.
  5. Data visualization and analysis tools: Matching data visualisation to purpose, and alternatives to the “2 tables, 1 figure” limitations of print publication.
  6. Reference management: Features beyond simply keeping track of PDFs and folders; export, conversion between citation styles, cross-platform options, collaborating on citation.
  7. Proofreading and plagiarism detection: Increasingly sophisticated writing assistants that identify issues with writing and suggest alternatives.
  8. Data archiving: Persistent digital datasets, metadata, discoverability, DOIs, archival services.
  9. Scientometrics and Altmetrics: Alternatives to citation and impact factor as means of evaluating influence and reach.

There’s an enormous amount of information packed into this article and I found myself with loads of tabs open as I explored different platforms and services. I spend a lot of time thinking about writing, workflow and compatibility, and this paper gave me even more to think about. If you’re fine with Word and don’t really get why anyone would need anything else, you probably don’t need to read this paper. But if you’re like me and get irritated because Word doesn’t have a “distraction free mode”, you may find yourself spending a couple of hours exploring options you didn’t know existed.


Note: I’m the editor and founder of OpenPhysio, an open-access, peer-reviewed online journal with a focus on physiotherapy education. If you’re doing interesting work in the classroom, even if you have no experience in publishing educational research, we’d like to help you share your stories.

Categories: AI clinical research

Survey: Physiotherapy clinicians’ perceptions of artificial intelligence in clinical practice

We know very little about how physiotherapy clinicians think about the impact of AI-based systems on clinical practice, or how these systems will influence human relationships and professional practice. As a result, we cannot prepare for the changes that are coming to clinical practice and physiotherapy education. The aim of this study is to explore how physiotherapists currently think about the potential impact of artificial intelligence on their own clinical practice.

Earlier this year I registered a project that aims to develop a better understanding of how physiotherapists think about the impact of artificial intelligence in clinical practice. Now I’m ready to move forward with the first phase of the study, which is an online survey of physiotherapy clinicians’ perceptions of AI in professional practice. The second phase will be a series of follow up interviews with survey participants who’d like to discuss the topic in more depth.

I’d like to get as many participants as possible (obviously) so would really appreciate it if you could share the link to the survey with anyone you think might be interested. There are 12 open-ended questions split into 3 sections, with a fourth section for demographic information. Participants don’t need a detailed understanding of artificial intelligence and (I think) I’ve provided enough context to make the questionnaire simple for anyone to complete in about 20 minutes.

Here is a link to the questionnaire: https://forms.gle/HWwX4v7vXyFgMSVLA.

This project has received ethics clearance from the University of the Western Cape (project number: BM/19/3/3).

Categories: education leadership research scholarship

SAAHE podcast on building a career in HPE

In addition to the In Beta podcast that I host with Ben Ellis (@bendotellis), I’m also involved with a podcast series on health professions education with the South African Association of Health Educators (SAAHE). I’ve just published a conversation with Vanessa Burch, one of the leading South African scholars in this area.

You can listen to this conversation (and earlier ones) by searching for “SAAHE” in your podcast app, subscribing and then downloading the episode. Alternatively, listen online at http://saahe.org.za/2019/06/8-building-a-career-in-hpe-with-vanessa-burch/.

In this wide-ranging conversation, Vanessa and I discuss her 25 years in health professions education and research. We look at the changes that have taken place in the domain over the past 5-10 years and how this has impacted the opportunities available for South African health professions educators in the early stages of their careers. We talk about developing the confidence to approach people you may want to work with, from the days when you had to be physically present at a conference workshop to exploring novel ways of connecting with colleagues in a networked world. We discuss Vanessa’s role in establishing the Southern African FAIMER Regional Institute (SAFRI), as well as the African Journal of Health Professions Education (AJHPE), and what we might consider when presented with opportunities to drive change in the profession.

Vanessa has a National Excellence in Teaching and Learning Award from the Council on Higher Education and the Higher Education Learning and Teaching Association of South Africa (HELTASA), and holds a Teaching at University (TAU) fellowship from the Council on Higher Education of South Africa. She is a Deputy Editor at the journal Medical Education, and Associate Editor of Advances in Health Sciences Education. Vanessa was Professor and Chair of Clinical Medicine at the University of Cape Town from 2008 to 2018 and is currently Honorary Professor of Medicine at UCT. She works as an educational consultant to the Colleges of Medicine of South Africa.

Categories: research

What does scholarship sound like?

Creative work is scholarly work

The Specialist Committee recognises the importance of both formal academic research and creative outputs for the research cultures in many departments, as well as for individual researchers; it thus aims to give equal value to theoretical/empirical research (i.e. historical, theoretical, analytic, sociological, economic, etc. studies from an arts perspective) and creative work (i.e. in cases where the output is the result of a demonstrable process of investigation through the processes of making art.); the latter category of outputs is treated as fully equivalent to other types of research output, but in all cases credit is only given to those outputs which demonstrate quality and have a potential for impact and longevity.

The South African National Research Foundation has recently shared guidelines for the recognition of creative scholarly outputs, which serve to broaden the concept of what kind of work can be regarded – and importantly, recognised – as “scholarly”. The guidelines suggest that creative work could include (among others):

  • Non-conventional academic activities related to creative work and performance: Catalogues, programmes, and other supporting documentation describing the results of arts research in combination with the works themselves;
  • In Drama and theatre: scripts or other texts for performances and the direction of and design (lighting, sound, sets, costumes, properties, etc.) for live presentations as well as for films, videos and other types of media presentation; this also applies to any other non-textual public output (e.g. puppetry, animated films, etc.), provided they can be shown to have entered the public domain;

I’m going to talk about podcasts as scholarly outputs because I’m currently involved in three podcast projects: In Beta (conversations about physiotherapy education), the SAAHE podcast (conversations about educational research in the health professions), and a new project to document the history of the physiotherapy department at the University of the Western Cape.

These podcasts take up a lot of time; time that I’m not spending writing the articles that are the primary form of intellectual capital in academia. In light of the new guidelines from the NRF, I wondered whether a podcast could be considered a scholarly output. There are other reasons why we may want to consider recognising podcasts as scholarly outputs:

  1. They increase access for academics who are doing interesting work but who, for legitimate reasons, may not be willing to write an academic paper.
  2. They increase diversity in the academic domain because they can be (should be?) published in the language of preference of the hosts.
  3. They reduce the dominance of the PDF for knowledge distribution, which could only be a good thing.
  4. Conversations among academics are a legitimate form of knowledge creation, as new ideas emerge from the interactions between people (like, for example, in a focus group discussion).
  5. Podcasts – if they are well-produced – are likely to have a wider audience than academic papers.
  6. Audio gives an audience another layer of interesting-ness when compared to reading a PDF.
  7. Academic podcasts may make scholarship less boring (although, to be honest, we’re talking about academics, so I’m not convinced with this one).

What do we mean by “scholarship”?

Most people think of scholarly work as the research article (and probably the conference presentation) but there’s no reason that the article/PDF should remain the primary form of recognised scholarly output. This model also requires that anyone wanting to contribute to a scholarly conversation must learn the following:

  • “Academic writing” – the specific grammar and syntax we expect from our writers.
  • Article structure – usually, the IMRAD format (Introduction, Methods, Results and Discussion).
  • Journals – where to submit, who is most likely to publish, what journals cater for which audiences.
  • Research process – I’m a big fan of the scientific method but sometimes it’s enough for a new idea to be shared without it first having to be shown to be “true”.

Instead of expecting people to first learn the traditions and formal structures that we’ve accepted as the baseline reality for sharing scholarly work, what if we just asked what scholarship is? Instead of defining “scholarship” as “research paper/conference presentation”, what if we started with what scholarship is considered to be and then see what maps onto that? From Wikipedia:

The scholarly method or scholarship is the body of principles and practices used by scholars to make their claims about the subject as valid and trustworthy as possible and to make them known to the scholarly public… Scholarship…is creative, can be documented, can be replicated or elaborated, and is peer-reviewed.

So there’s nothing about publishing PDFs in journals as part of this definition of scholarship. What about the practice of doing scholarly work? I’m going to use Boyer’s model of scholarship, not because it’s the best but because it is relatively common and not very controversial. Boyer includes four categories of scholarly work (note that this is not a series of progressions that one has to move through in order to reach the last category…each category is a form of scholarship on its own):

  • Scholarship of discovery: what is usually considered to be basic research or the search for new knowledge.
  • Scholarship of integration: where we aim to give meaning to isolated facts by considering them in context; it aims to ask what the findings of discovery mean.
  • Scholarship of application: the use of new knowledge to solve problems that we care about.
  • Scholarship of teaching: the examination of how teaching new knowledge can both educate and motivate those in the discipline; it is about sharing what is learned.

Here are each of Boyer’s categories with reference to podcasts:

  • Discovery (advancing knowledge): Can we argue that knowledge can be advanced through conversation? Is there something Gestalt in a conversation where a new whole can be an emergent property of the constituent parts? How is a podcast conversation any different to a focus group discussion where the cohort is a sample with specific characteristics of interest?
  • Integration (synthesis of knowledge): Can the editing and production of a podcast, using the conversation as the raw data, be integrated with other knowledge in order to add new levels of explanation and critique? This could either be in the audio file or as show notes. Could podcast guests be from different disciplines, exploring a topic from different perspectives?
  • Application/engagement (applied knowledge): Can we use emergent knowledge from the podcast to do something new in the world? Can we take what is learned from the initial conversation, which may have been modified and integrated with other forms of knowledge (in multiple formats e.g. text, images, video), and apply it to a problem that we care about?
  • Teaching (openly shared knowledge): Can we, after listening to a podcast and applying what we learned, share what was done, as well as the result, with others so that the process (methods) and outcomes (results) can be evaluated by our peers?

This may not be a definitive conclusion to the question of whether podcasts could be regarded as scholarly work but at the very least, it suggests that it’s something we could consider. If you accept that a podcast might be regarded as scholarly we can then ask how we might go about formally recognising it as such.

Workflow to distribute scholarly work

I’m going to use an academic, peer-reviewed, traditional journal (or at least, the principle of one) to explore a workflow that we can use to get a sense of how a podcast could be formally recognised as scholarly work. We first need to note that a journal has two primary functions:

  1. Accreditation, which is usually a result of the journal’s peer review process, and its brand/history/legacy. The New England Journal of Medicine is a recognised “accreditor” of scholarly work, not because there is anything special about the journal but simply because it is the New England Journal of Medicine. Their reputation is enough for us to trust them when they say that the ideas presented in a piece of work have been tested through peer review and have not been found wanting.
  2. Distribution, which in the past meant printing those ideas on paper and literally shipping them around the world. Today, this distribution function has changed to discoverability; the journal does what it can to make sure your article can be found by search engines, and if you’re the New England Journal of Medicine you don’t need to do much because Google will do your quality signalling for you by surfacing your articles above others. Therefore, journals host content and try to increase the chances that we can find it, while the distribution function has largely been taken over by us (because we share articles on behalf of the journals).

By separating out the functions of a journal we can see that it’s possible for a journal to accredit work that it does not necessarily have to host itself. We could have a journal that is asked to accredit a piece of work i.e. signal to readers (or in our case, listeners) that the work has passed some set of criteria that we use to describe it as “scholarly”.

What might this workflow look like? Since I’m trying to show how podcasts could be accredited within the constraints of the existing system of journal publications, I’m going to stick to a traditional process as closely as possible, even though I think that this makes the process unnecessarily complicated, especially when you think about what needs to happen following the peer review. Here is what I think the accreditation process could look like:

  1. Create a podcast episode (this is basically a focus group discussion) on a topic of interest where guests discuss a question or a problem that their community of peers recognises as valid. This could be done by a call to the community for topics of interest.
  2. Edit the podcast, including additional resources and comments as show notes. The podcast creators could even include further comments and analysis, either before, during or after the initial recorded conversation. The audio includes the raw data (the recorded conversation), real-time analysis and critique by participants, discussion of potential applications of the emergent knowledge, and conclusion (maybe via post-recording reflection and analysis).
  3. Publish the episode on any podcast-hosting platform. The episode is now in the public domain.
  4. Submit a link to the episode to a journal, which embeds the podcast episode as a post (“article”) along with a short description of what it includes (like an abstract), a description of the process of creation (like the methods), the outcome of the conversation (like a conclusion), and a list of additional reading (like a reference list).
  5. The journal begins the process of accrediting the podcast by allocating peer reviewers, whose reviews are published alongside the embedded podcast in the journal.
  6. Reviewers review the “methods”, “conclusions”, “references” and knowledge claims of the podcast guests, add comments to the post, and highlight the limitations of the episode. The show notes include a description of the process, participants, additional readings, DOI, etc. This could be where the process ends; the journal has used peer review to assign a measure of “quality” to the episode and does not attempt to make a judgement on “value” (which is what journals do when they reject submissions). It is left to the listener to decide if the podcast has value for them.
  7. The following points are included for completeness as they follow a traditional iterative process following peer review. I don’t think these steps are necessary but are only included to map the workflow onto a process that most authors will be familiar with:
    1. The podcast creators make some changes to the audio file, perhaps by including new analysis and comments in the episode, or maybe by adding new information to the textual component of the episode (i.e. the show notes).
    2. The new episode is released. This re-publication of the episode would need to be classified as an entirely different version since the original episode would have been downloaded and shared to networks. An updated version would, therefore, need a new URL, a new page on the podcast hosting service, etc.

In the example workflow above, the journal never hosts the audio file and does not “publish” the podcast. It includes an embedded version of the episode, the show notes (which include the problem under discussion, the participants and their bios, an analysis of the conversation, and a list of references), as well as the full peer reviews. Readers/listeners then decide on the “importance” of the episode and whether or not to assign value to it. In other words, the readers/listeners decide what work is valuable, rather than the peer reviewers or the journal.
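To make the shape of such a journal post concrete, here’s a minimal sketch of the record a journal might publish when accrediting an episode it doesn’t host. The field names are my own invention for illustration, not any journal’s actual schema; they map onto the abstract/methods/conclusion/references elements described in the workflow above:

```python
from dataclasses import dataclass, field

@dataclass
class PeerReview:
    reviewer: str
    comments: str  # published openly alongside the embedded episode

@dataclass
class AccreditedEpisode:
    title: str
    episode_url: str   # the journal embeds this; it never hosts the audio
    abstract: str      # short description of what the episode includes
    methods: str       # description of the process of creation
    conclusion: str    # the outcome of the conversation
    references: list[str] = field(default_factory=list)  # additional reading
    reviews: list[PeerReview] = field(default_factory=list)
    doi: str = ""      # assigned as part of the accreditation process

# Hypothetical example of what a journal post ("article") might contain.
episode = AccreditedEpisode(
    title="Assessment in physiotherapy education",
    episode_url="https://podcasts.example/episodes/42",
    abstract="A focus-group-style conversation on a community-nominated topic.",
    methods="Recorded remotely; edited with added commentary and show notes.",
    conclusion="Emergent questions and applications for classroom practice.",
)
```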

In summary, I’ve tried to describe why podcasts are potentially a useful format for creating and sharing new knowledge, presented a framework for determining whether a podcast could be considered scholarly, and described the workflow and some practical implications of an accreditation process using a traditional journal.