Reflections on the In Beta podcast and community

It’s been about a year and a half since Ben and I started the In Beta community (see my first post in July 2017) and I wanted to reflect on what we’ve achieved in the past 18 months or so. Here are the major aspects of the project with some statistics and my thoughts on the process.

Website: We’re hosting our website on a server provided by the University of the Western Cape and use open source software (WordPress) to build the site, which means that the project costs Ben and me nothing except our time and energy. A few months ago I made a few big changes to the site, which hadn’t been updated since it launched, including a new theme and layout, new per-episode images, and an embedded media player for each episode. These changes will also become more important as the site becomes more central to our plans and needs to do more than simply distribute the audio for the podcasts.

We’ve had a fair amount of traffic since we launched the site in October 2017; far more than I expected. The numbers are obviously quite low relative to more popular sites, but consider that this is a project about physiotherapy education.

Most of our visitors came from the UK (where Ben lives) and the Netherlands (where Joost lives). I’m not sure if that’s a coincidence or if the two of them are just uncommonly popular. Incidentally, Joost has been a major supporter and promoter of the project through his connections with ENPHE and we hope that this collaboration continues to grow.


Podcasts: We’ve released 8 episodes, starting with our first in October 2017, so we publish roughly one episode every 1.5 months. We have another 3 episodes recorded that we haven’t finished editing yet. The audio editing is, by far, the most time-consuming part of the process. We’re hoping to reduce the hassle of this component by improving the quality of the initial recordings: 1) getting better at moderating the conversations so that there’s less to cut, and 2) making more of an effort to capture good audio in the first place. Here are the 8 episodes we’ve published so far, along with the number of times each has been downloaded. These statistics exclude the first 50 or so downloads of the first episode, which was hosted on Soundcloud before we moved to our own distribution platform.

  1. Inquiry-based learning (3 October 2017) – 182
  2. Internationalisation of the curriculum (9 October 2017) – 191
  3. Clinical practice assessment forms (10 November 2017) – 158
  4. Guided choice-based learning (9 February 2018) – 82
  5. A critical pedagogy for online learning (28 February 2018) – 42
  6. New paradigms for physio education (9 May 2018) – 94
  7. Cost and value in health professions education (4 June 2018) – 70
  8. Classroom-based assessment (6 September 2018) – 46


Projects: One of our original ideas was to use the website as a way to share examples of classroom exercises, assignments, and teaching practices that others would be able to use as a resource. The plan was to describe in a fair amount of detail the process for setting up a learning task that others could simply copy, maybe with a few minor tweaks. The project pages would include the specific learning outcomes that the lecturer hopes to achieve, comprehensive descriptions of the learning activities, links to freely available resources, and examples of student work. This aspect of In Beta hasn’t taken off as much as we would’ve liked but the potential is still there and will hopefully continue growing over time.

Google Docs: We started with Google Docs as a way to plan for our podcast recordings, using a templated outline that we’d invite guests to complete. The idea is that guests on the podcast use the template to establish the context for the conversation, including the background, the problem they’re trying to address, and a reading list for interested participants. We then take some of that information and incorporate it into the show notes for the episode and leave the Google Doc online for further reading if anyone is interested. The process (and template) has remained more or less the same since we initially described it, but I’m uncertain about whether or not we should keep it going forward. It seems like a lot of effort to ask of guests and, without statistics for Docs, we can’t be sure if anyone is actually reading them. On the other hand, completing the template really does seem to be good preparation for us to take a deep dive into the topic.

Membership: We had about 100 people join the Google+ community but saw little engagement on the site. I think that this is understandable considering that most people have more than enough going on in their personal and professional lives to add yet another online destination to their lists. Most people are already on several social media platforms and it’s not reasonable to expect them to add Google+ just for this project. So we weren’t too upset to see that Google is planning to sunset the consumer version of Google+; in some ways it’s a bit of a relief not to have to worry about managing the community in different places. We’re in the process of asking people to migrate to the project website and sign up for email notifications of announcements.

Conference collaborations: Ben and I worked with Joost to run two In Beta workshops at the IPSM (Portugal) and ENPHE (Paris) conferences in 2018. We based both sessions on the Unconference format and used them as experiments to think differently about how conference workshops could be useful for participants in the room, as well as those who were “outside” of it. While neither of the workshops went exactly as we planned, I think the fact that both of these sessions actually happened, in large part due to the work that Joost and Ben put in, was a success in itself. We’ve recorded our thoughts on this process and will publish that as an episode early in 2019. It’d be nice to have more of these sessions where we try to do something “in the world”.

Plans for 2019: Our rough ideas for the next 12 months include the following:

  • More frequent podcast episodes, which should be possible if we can reduce the amount of time it takes to edit each episode. It’d also be nice to get assistance with the audio editing, so if you have an interest in that kind of thing and would like to be involved, let us know.
  • Work on more collaborative projects with colleagues who are interested in alternative approaches to physiotherapy education. For example, it might be interesting to publish an edited “book” of short stories related to physiotherapy education. It could be written by students, educators and clinicians, and might cover a broad range of topics that explore physiotherapy education from a variety of perspectives.
  • Grow the community so that In Beta is more than a podcast. We started the project because we wanted to share interesting conversations in physiotherapy education and we think that there’s enormous scope for this idea to be developed. But we also know that we’re never going to have all the good ideas ourselves and so we need to involve more of the people doing the interesting work in classrooms and clinical spaces around the world.
  • Host a workshop for In Beta community members, possibly at a time when enough of us are gathered together in the same place. Maybe in Europe somewhere. Probably in May. Something like a seminar or colloquium on physiotherapy education. If this sounds like something you may like to be involved with, please let us know.

It’s easy to forget what you’ve achieved when you’re caught up in the process. I think that both Ben and I would probably like to have done a bit more on the project over the past 18 months but if I look at where we started (a conversation over coffee at a conference in 2016) then I’m pretty happy with what we’ve accomplished. And I’m excited for 2019.

My presentation for the Reimagine Education conference

Here is a summarised version of the presentation I’m giving later this morning at the Reimagine Education conference. You can download the slides here.

In Beta and sunsetting consumer Google+

Action 1: We are shutting down Google+ for consumers.

This review crystallized what we’ve known for a while: that while our engineering teams have put a lot of effort and dedication into building Google+ over the years, it has not achieved broad consumer or developer adoption, and has seen limited user interaction with apps. The consumer version of Google+ currently has low usage and engagement: 90 percent of Google+ user sessions are less than five seconds.

I don’t think it’s a surprise to anyone that Google+ wasn’t a big hit although I am surprised that they’ve taken the step to shut it down for consumers. And this is the problem with online communities in general; when the decision is made that they’re not cost-effective, they’re shut down regardless of the value they create for community members.

When Ben and I started In Beta last year we decided to use Google+ for our community announcements and have been pretty happy with what we’ve been able to achieve with it. The community has grown to almost 100 members and, while we don’t see much engagement or interaction, that’s not why we started using it. For us, it was to make announcements about planning for upcoming episodes and since we didn’t have a dedicated online space, it made sense to use something that already existed. Now that Google+ is being sunsetted we’ll need to figure out another place to set up the community.


adapting to constant change

The human work of tomorrow will not be based on competencies best-suited for machines, because creative work that is continuously changing cannot be replicated by machines or code. While machine learning may be powerful, connected human learning is novel, innovative, and inspired.

Source: Jarche, H. (2018). adapting to constant change.

A good post on why learning how to learn is the only reasonable way to think about the future of work (and professional education). The upshot is that Communities of Practice are implicated in helping us adapt to working environments that are constantly changing, which will most likely continue to be the case.

However, I probably wouldn’t take the approach that it’s “us vs machines” because I don’t think that’s where we’re going to end up. I think it’s more likely that those who work closely with AI-based systems will outperform and replace those who don’t. In other words, we’re not competing with machines for our jobs; we’re competing with other people who use machines more effectively than we do.

Trying to be better than machines is not only difficult but our capitalist economy makes it pretty near impossible.

This is both true and a bit odd. No-one thinks they need to be able to do complex mathematics without calculators, and those who are better at using calculators can do more complex mathematics. Why is it such a big leap to realise that we don’t have to be better image classifiers than machines either? Let’s accept that diagnosis from CT will be performed by AI and focus on how that frees up physician time for other human- and patient-centred tasks. What will medical education look like when we’re teaching students that adapting while working with machines is the only way to stay relevant? I think that clinicians who graduate from medical schools that take this approach are more likely to be employed in the future.

Technology Beyond the Tools

You didn’t need to know about how to print on a printing press in order to read a printed book. Writing implements were readily available in various forms in order to record thoughts, as well as communicate with them. The use was simple requiring nothing more than penmanship. The rapid advancement of technology has changed this. Tech has evolved so quickly and so universally in our culture that there is now literacy required in order for people to effectively and efficiently use it.

Reading and writing as a literacy was hard enough for many of us, and now we are seeing that there is a whole new literacy that needs to be not only learned, but taught by us as well.

Source: Whitby, T. (2018). Technology Beyond the Tools.

I wrote about the need to develop these new literacies in a recent article (under review) in OpenPhysio. From the article:

As clinicians become single nodes (and not even the most important nodes) within information networks, they will need data literacy to read, analyse, interpret and make use of vast data sets. As they find themselves having to work more collaboratively with AI-based systems, they will need the technological literacy that enables them to understand the vocabulary of computer science and engineering and to communicate with machines. Failing that, we may find that clinicians will simply be messengers and technicians carrying out the instructions provided by algorithms.

It really does seem like we’re moving towards a society in which the successful use of technology is, at least to some extent, premised on your understanding of how it works. As educators, it is incumbent on us to 1) know how the technology works so that we can 2) help students use it effectively while avoiding exploitation by for-profit companies.

See also: Aoun, J. (2017). Robot-Proof: Higher Education in the Age of Artificial Intelligence. MIT Press.

Another Terrible Idea from Turnitin | Just Visiting

Allowing the proliferation of algorithmic surveillance as a substitution for human engagement and judgment helps pave the road to an ugly future where students spend more time interacting with algorithms than instructors or each other. This is not a sound way to help writers develop robust and flexible writing practices.

Source: Another Terrible Idea from Turnitin | Just Visiting

First of all, I don’t use Turnitin and I don’t see any good reason for doing so. Combating the “cheating economy” doesn’t depend on us catching the students; it depends on creating the conditions in which students believe that cheating offers little real value relative to the pedagogical goals they are striving for. In general, I agree with a lot of what the author is saying.

So, with that caveat out of the way, I wanted to comment on a few other pieces in the article that I think make significant assumptions and limit the utility of the piece, especially with respect to how algorithms (and software agents in particular) may be useful in the context of education.

  • The use of the word “surveillance” in the quote above establishes the context for the rest of the paragraph. If the author had used “guidance” instead, the tone would be different. Same with “ugly”; remove that word and the meaning of the sentence is very different. It just makes it clear that the author has an agenda which clouds some of the other arguments about the use of algorithms in education.
  • For example, the claim that it’s a bad thing for students to interact with an algorithm instead of another person is empirical; it can be tested. But it’s presented here in a way that implies that human interaction is simply better. Case closed. But what if we learned that algorithmic guidance (via AI-based agents/tutors) actually leads to better student outcomes than learning with/from other people? Would we insist on human interaction because it would make us feel better? Why not test our claims by doing the research before making judgements?
  • The author uses a moral argument (at least, this was my take based on the language used) to position AI-based systems (specifically, algorithms) as being inherently immoral with respect to student learning. There’s a confusion between the corporate responsibility of a private company – like Turnitin – to make a profit, and the (possibly pedagogically sound) use of software agents to enhance some aspects of student learning.

Again, there’s some good advice around developing assignments and classroom conditions that make it less likely that students will want to cheat. This is undoubtedly a Good Thing. However, some of the claims about the utility of software agents are based on assumptions that aren’t necessarily supported by the evidence.

Emotions and assessment: considerations for rater‐based judgements of entrustment

We identify and discuss three different interpretations of the influence of raters’ emotions during assessments: (i) emotions lead to biased decision making; (ii) emotions contribute random noise to assessment, and (iii) emotions constitute legitimate sources of information that contribute to assessment decisions. We discuss these three interpretations in terms of areas for future research and implications for assessment.

Source: Gomez‐Garibello, C. and Young, M. (2018), Emotions and assessment: considerations for rater‐based judgements of entrustment. Med Educ, 52: 254-262. doi:10.1111/medu.13476

When are we going to stop thinking that assessment – of any kind – is objective? As soon as you’re making a decision (about what question to ask, the mode of response, the weighting of the item, etc.) you’re making a subjective choice about the signal you’re sending to students about what you value. If the student considers you to be a proxy of the profession/institution, then you’re subconsciously signalling the values of the profession/institution.

If you’re interested in the topic of subjectivity in assessment, you may also want to listen to the two In Beta episodes in which we explore it.

We Need Transparency in Algorithms, But Too Much Can Backfire

The students had also been asked what grade they thought they would get, and it turned out that levels of trust in those students whose actual grades hit or exceeded that estimate were unaffected by transparency. But people whose expectations were violated – students who received lower scores than they expected – trusted the algorithm more when they got more of an explanation of how it worked. This was interesting for two reasons: it confirmed a human tendency to apply greater scrutiny to information when expectations are violated. And it showed that the distrust that might accompany negative or disappointing results can be alleviated if people believe that the underlying process is fair.

Source: We Need Transparency in Algorithms, But Too Much Can Backfire

This article uses the example of algorithmic grading of student work to discuss issues of trust and transparency. One of the findings I thought was a useful takeaway in this context is that full transparency may not be the goal; we should rather aim for medium transparency, and only in situations where students’ expectations are not met. For example, a student whose grade was lower than expected might need to be told something about how it was calculated. But when students got too much information, it eroded trust in the algorithm completely. When students got the grade they expected, no transparency was needed at all, i.e. they didn’t care how the grade was calculated.

For developers of algorithms, the article also provides a short summary of what explainable AI might look like. For example, without exposing the underlying source code, which in many cases is proprietary and holds commercial value for the company, explainable AI might simply identify the relationships between inputs and outcomes, highlight possible biases, and provide guidance that may help to address potential problems in the algorithm.
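
As a loose illustration of that idea, here is a minimal sketch of one simple technique sometimes used for this kind of explanation: permutation importance. It is not the approach the article describes, and the data, model and feature names are invented for illustration; the point is only that we can expose relationships between inputs and outcomes without exposing the model’s source code.

```python
# A minimal sketch of permutation importance: shuffle one input column at a
# time and measure how much the model's accuracy drops. The data, model and
# feature names below are invented purely for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 1000
feature_names = ["prior_grades", "word_count", "random_noise"]  # hypothetical inputs
X = rng.normal(size=(n, 3))
y = (X[:, 0] + 0.3 * X[:, 1] > 0).astype(int)  # outcome depends mostly on the first input

model = RandomForestClassifier(random_state=0).fit(X, y)
baseline = model.score(X, y)

for i, name in enumerate(feature_names):
    X_shuffled = X.copy()
    X_shuffled[:, i] = rng.permutation(X_shuffled[:, i])  # break this feature's link to the outcome
    drop = baseline - model.score(X_shuffled, y)
    print(f"{name}: accuracy drop when shuffled = {drop:.3f}")
```

In this toy case the first feature shows the largest drop, which is the kind of input-outcome relationship (and potential bias) an explanation could surface without revealing any proprietary code.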

Robots in the classroom? Preparing for the automation of teaching | BERA

Agendas around AI and education have been dominated by technology designers and vendors, business interests and corporate reformers. There is a clear need for vigorous responses from educators, students, parents and other groups with a stake in public education. What do we all want from our education systems as AI-driven automation becomes more prominent across society?

Source: Robots in the classroom? Preparing for the automation of teaching | BERA

We need teachers, clinicians, and clinician educators involved in the process of designing, developing, implementing and evaluating AI-based systems in the higher education and clinical context. As long as the agenda for 21st century education and clinical care is driven by corporate interests (and how could it not, given the enormous commercial value of AI), it’s likely that those responsible for teaching the next generation of health professionals will be passive recipients of algorithmic decision-making rather than empowered participants in their design.

An introduction to artificial intelligence in clinical practice and education

Two weeks ago I presented some of my thoughts on the implications of AI and machine learning in clinical practice and health professions education at the 2018 SAAHE conference. Here are the slides I used (20 slides for 20 seconds each) with a very brief description of each slide. This presentation is based on a paper I submitted to OpenPhysio, called: “Artificial intelligence in clinical practice: Implications for physiotherapy education”.


The graph shows how traffic to a variety of news websites changed after Facebook made a change to their Newsfeed algorithm, highlighting the influence that algorithms have on the information presented to us, and how little real choice we now have about what to read. When algorithms are responsible for filtering what we see, they have power over what we learn about the world.


The graph shows the near flat line of social development and population growth until the invention of the steam engine. Before that, all of the Big Ideas we came up with had relatively little impact on our physical well-being. If your grandfather spent his life pushing a plough there was an excellent chance that you’d spend your life pushing one too. But once we figured out how to augment our physical abilities with machines, we saw significant advances in society and industry and an associated improvement in everyone’s quality of life.


The emergence of artificial intelligence in the form of narrowly constrained machine learning algorithms has demonstrated the potential for important advances in cognitive augmentation. Basically, we are starting to really figure out how to use computers to enhance our intelligence. However, we must remember that we’ve been augmenting our cognitive ability for a long time, from exporting our memories onto external devices, to performing advanced computation beyond the capacity of our brains.


The enthusiasm with which modern AI is being embraced is not new. The research and engineering aspects of artificial intelligence have been around since the 1950s, while fictional AI has an even longer history. The field has been through a series of highs and lows (the lows known as AI winters). The developments during these cycles were fueled by what has become known as Good Old Fashioned AI: early attempts to explicitly design decision-making into algorithms by hard-coding all possible variations of the interactions in a closed environment. Understandably, these systems were brittle and unable to adapt to even small changes in context. This is one of the reasons that previous iterations of AI had little impact in the real world.


There are 3 main reasons why it’s different this time. The first is the emergence of cheap but powerful hardware (mainly central and graphics processing units), which has seen computational power growing by a factor of 10 every 4 years. The second is the exponential growth of data; massive data sets are an important reason that modern AI approaches have been so successful. The graph in the middle column shows data growth in zettabytes (10 to the power of 21 bytes). At this rate of growth we’ll run out of metric prefixes in a few years (yotta is the only prefix after zetta). The third characteristic of modern AI research is the emergence of vastly improved machine learning algorithms that are able to learn without being explicitly told what to learn. In the example here, the algorithm has coloured in the line drawings to create a pretty good photorealistic image, but without being taught any of the underlying concepts, i.e. human, face, colour, drawing, photo, etc.


We’re increasingly seeing evidence that in some very narrow domains of practice (e.g. reasoning and information recall), machine learning algorithms can outdiagnose experienced clinicians. It turns out that computers are really good at classifying patterns of variables that are present in very large datasets. And diagnosis is just a classification problem. For example, algorithms are very easily able to find sets of related signs and symptoms and put them into a box that we call “TB”. And increasingly, they are able to do this classification better than the best of us.
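
To make the “diagnosis as classification” framing concrete, here is a minimal, hedged sketch in Python. The symptom features, labels and data are entirely made up and the model is a toy; the point is only that a classifier learns to map a pattern of signs and symptoms to a category.

```python
# A toy sketch of diagnosis as classification: a model learns to map a
# pattern of signs/symptoms to a label. The features, data and "diagnosis"
# below are invented for illustration; this is not a clinical tool.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Each "patient" is a vector of binary signs/symptoms, e.g.
# [persistent cough, night sweats, weight loss, fever]
n = 500
X = rng.integers(0, 2, size=(n, 4))

# Invented labelling rule: more of these symptoms together, higher odds of the label.
y = (X.sum(axis=1) + rng.normal(0, 0.5, size=n) > 2.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

print("Held-out accuracy:", model.score(X_test, y_test))
print("Prediction for a patient with all four symptoms:", model.predict([[1, 1, 1, 1]])[0])
```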


It is estimated that up to 60% of a doctor’s time is spent capturing information in the medical record. Natural language processing algorithms are able to “listen” to the ambient conversation between a doctor and patient, record the audio and transcribe it (translating it in the process if necessary). The system then performs semantic analysis of the text (not just keyword analysis) to extract meaningful information, which it can use to populate an electronic health record. While the technology is in a very early phase and not yet safe for real-world application, it’s important to remember that this is the worst it’s ever going to be. Even if we reach some kind of technological dead end with respect to machine learning and from now on we only increase efficiency, we are still looking at a transformational technology.
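
For readers who think in code, here is a rough sketch of that pipeline. Everything in it is hypothetical: the function names are placeholders, transcribe() returns a canned string instead of calling a real speech-to-text service, and the “semantic analysis” is a trivial keyword lookup standing in for a clinical NLP model.

```python
# A rough, hypothetical sketch of the ambient-documentation pipeline described
# above: audio -> transcript -> semantic analysis -> structured record.
from dataclasses import dataclass, field
from typing import List

@dataclass
class EHRNote:
    symptoms: List[str] = field(default_factory=list)
    medications: List[str] = field(default_factory=list)
    plan: str = ""

def transcribe(audio_path: str) -> str:
    # Placeholder for an automatic speech recognition (ASR) step.
    return "patient reports three weeks of night sweats and a persistent cough, started on rifampicin"

def extract_entities(text: str) -> dict:
    # Placeholder for semantic analysis; a real system would use a clinical NLP model.
    known_symptoms = ["night sweats", "persistent cough", "fever"]
    known_medications = ["rifampicin", "isoniazid"]
    return {
        "symptoms": [s for s in known_symptoms if s in text],
        "medications": [m for m in known_medications if m in text],
    }

def populate_record(audio_path: str) -> EHRNote:
    entities = extract_entities(transcribe(audio_path))
    return EHRNote(
        symptoms=entities["symptoms"],
        medications=entities["medications"],
        plan="auto-generated note; clinician to review",
    )

print(populate_record("consultation.wav"))
```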


An algorithm recently passed the Chinese national medical exam, qualifying (in theory) as a physician. While we can argue that practising as a physician is more than writing a text-based exam, it’s hard not to acknowledge the fact that – at the very least – machines are becoming more capable in the domains of knowledge and reasoning that characterise much of clinical practice. Again, this is the worst that this technology is ever going to be.


This graph shows the number of AI applications under development in a variety of disciplines, including medicine (on the far right). The green segment shows the number of applications where AI is outperforming human beings. Orange segments show the number of applications that are performing relatively well, with blue highlighting areas that need work. There are two other points worth noting: medical AI is the area of research that is clearly showing the most significant advances (maybe because it’s the area where companies can make the most money); and all the way at the far left of the graph is education, suggesting that it may be some time before algorithms show the same progress in teaching.


Contrary to what we see in the mainstream media, AI is not a monolithic field of research; it consists of a wide variety of different technologies and philosophies that are each sometimes referred to under the more general heading of “AI”. While much of the current progress is driven by machine learning algorithms (which is itself driven by the 3 characteristics of modern society highlighted earlier), there are many areas of development, each of which can potentially contribute to different areas of clinical practice. For the purposes of this presentation, we can define AI as any process that is able to independently achieve an objective within a narrowly constrained domain of interest (although the constraints are becoming looser by the day).


Machine learning is a sub-domain of AI research that works by exposing an algorithm to a massive data set and asking it to look for patterns. By comparing what it finds to human-tagged patterns in the data, developers can fine-tune the algorithm (i.e. “teach” it) before exposing it to untagged data and seeing how well it performs relative to the training set. This generally describes the “learning” process of machine learning. Deep learning is a sub-domain of machine learning that works by passing data through many layers, allocating different weights to the data at each layer, thereby coming up with a statistical “answer” that expresses an outcome in terms of probability. Deep learning neural networks underlie many of the advances in modern AI research.
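
As a hedged illustration of that description, here is a tiny neural network written from scratch in Python/numpy and trained on the classic XOR toy problem. Nothing here comes from the presentation itself; it simply shows data passing through weighted layers, an output expressed as a probability, and weights being nudged to match human-provided labels.

```python
# A tiny two-layer neural network trained on XOR: data flows through weighted
# layers, the output is a probability, and training adjusts the weights to
# shrink the gap between predictions and the human-tagged labels.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # human-tagged labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two layers of weights (and biases), initialised randomly.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
lr = 0.5

for step in range(10000):
    # Forward pass: each layer applies its weights to the previous layer's output.
    h = sigmoid(X @ W1 + b1)        # hidden layer
    p = sigmoid(h @ W2 + b2)        # output layer: probability of class "1"

    # Backward pass: nudge the weights to reduce the prediction error.
    err_out = p - y                            # gradient at the output (cross-entropy)
    err_hid = (err_out @ W2.T) * h * (1 - h)   # gradient pushed back to the hidden layer
    W2 -= lr * h.T @ err_out
    b2 -= lr * err_out.sum(axis=0)
    W1 -= lr * X.T @ err_hid
    b1 -= lr * err_hid.sum(axis=0)

print(np.round(p, 2))  # should end up close to the labels: [0, 1, 1, 0]
```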


Because machine and deep learning algorithms are trained on (biased) human-generated datasets, it’s easy to see how the algorithms themselves will have an inherent bias embedded in the outputs. The Twitter screenshot shows one of the least offensive tweets from Tay, an AI-enabled chatbot created by Microsoft, which learned from human interactions on Twitter. In the space of a few hours, Tay became a racist, sexist, homophobic monster – because this is what it learned from how we behave on Twitter. This is more of an indictment of human beings than it is of the algorithm. The other concern with neural networks is that, because of the complexity of the algorithms and the number of variables being processed, human beings are unable to comprehend how the output was computed. This has important implications when algorithms are helping with clinical decision-making and is the reason that resources are being allocated to the development of what is known as “explainable AI”.


As a result of the changes emerging from AI-based technologies in clinical practice we will soon need to stop thinking of our roles in terms of “professions” and rather in terms of “tasks”. This matters because, increasingly, many of the tasks we associate with our professional roles will be automated. This is not all bad news though, because it seems probable that increased automation of the repetitive tasks in our repertoire will free us up to take on more meaningful tasks, for example, having more time to interact with patients. We need to start asking which tasks computers are better at and then allocating those tasks to them. Of course, we will need to define what we mean by “better”: more efficient, more cost-effective, faster, etc.


Another important change that will require the use of AI-based technologies in clinical practice will be the inability of clinicians to manage – let alone understand – the vast amount of information being generated by, and from, patients. Not only are all institutional tests and scans digital but increasingly, patients are creating their own data via wearables – and soon, ingestibles – all of which will require that clinicians are able to collect, filter, analyse and interpret these vast streams of information. There is evidence that, without the help of AI-based systems, clinicians simply will not have the cognitive capacity to understand their patients’ data.


The impact of more patient-generated health data is that we will see patients being in control of their data, which will exist on a variety of platforms (cloud storage, personal devices, etc.), none of which will be available to the clinician by default. This means that power will move to the patient as they make choices about who to allow access to their data in order to help them understand it. Clinicians will need to come to terms with the fact that they will no longer wield the power in the relationship and in fact, may need to work within newly constituted care teams that include data scientists, software engineers, UI designers and smart machines. And all of these interactions will be managed by the patient who will likely be making choices with inputs from algorithms.


The incentives for enthusiastic claims around developments in AI-based clinical systems are significant; this is an academic land grab the likes of which we have only rarely experienced. The scale of the funding involved puts pressure on researchers to exaggerate claims in order to be the first to every important milestone. This means that clinicians will need to become conversant with the research methods and philosophies of the data scientists who are publishing the most cutting-edge research in the medical field. The time will soon come when it will be difficult to understand the language of healthcare without first understanding the language of computer science.


The implications for health professions educators are profound, as we will need to start asking ourselves what we are preparing our graduates for. When clinical practice is enacted in an intelligent environment and clinicians are only one of many nodes in vast information networks, what knowledge and skills do they need to thrive? When machines outperform human beings in knowledge and reasoning tasks, what is the value of teaching students about disease progression, for example? We may find ourselves graduating clinicians who are well-trained, competent and irrelevant. It is not unreasonable to think that the profession called “doctor” will not exist in 25 years’ time, having been superseded by a collective of algorithms and 3rd-party service providers who provide more fine-grained services at a lower cost.


There are three new literacies that health professions educators will need to begin integrating into our undergraduate curricula. Data literacy, so that healthcare graduates will understand how to manage, filter, analyse and interpret massive sets of information in real-time; Technological literacy, as more and more of healthcare is enacted in digital spaces and mediated by digital devices and systems; and Human literacy, so that we can become better at developing the skillsets necessary to interact more meaningfully with patients.


There is evidence to suggest that, while AI-based systems outperform human beings on many of the knowledge and reasoning tasks that make up clinical practice, the combination of AI and human originality results in the most improved outcomes of all. In other words, we may find that patient outcomes are best when we figure out how to combine human creativity and emotional response with machine-based computation.


And just when we’re thinking that “creativity” and “originality” are the sole province of human beings, we’re reminded that AI-based systems are making progress in those areas as well. It may be that the only way to remain relevant in a constantly changing world is to develop our ability to keep learning.