In the middle ages, cities could spend more than 100 years building a cathedral while at the same time believing that the apocalypse was imminent. They must’ve had a remarkable conviction that commissioning these projects would guarantee them eternal salvation. Compare this to the way we think about planning and design today where, for example, we don’t think more than 3 years into the future simply because that would fall outside the current election cycle. Sometimes it feels like the bulk of the work that a politician does today is to secure the funding that will get them re-elected tomorrow. Where do we see real-world examples of long-term planning that can help guide our decision-making in the present?
A few days ago I spent some time preparing feedback on a draft of the HPCSA minimum requirements for physiotherapy training in South Africa and one of the things that struck me was how much of it was just more of the same. This document is going to inform physiotherapy education and practice for at least the next decade, yet there was no mention of advances at the cutting edge of medical science or the massive impact that emerging technologies are going to have on clinical practice. Genetic engineering, nanotechnology, artificial intelligence and robotics are starting to drive significant changes in healthcare and it seems that, as a profession, we’re largely oblivious to what’s coming. It’s dawned on me that we have no real plan for the future of physiotherapy (the closest I’ve seen is Dave Nicholls’ new book, ironically titled The End of Physiotherapy).
What would a good plan look like? In the interests of time, I’m just going to take the high-level suggestions from this article on how the US could improve its planning for AI development and make a short comment on each (I’ve expanded on some of these ideas in my OpenPhysio article on the same topic).
Invest more: Fund research into practice innovations that take into account the social, economic, ethical and clinical implications of emerging technologies. Breakthroughs in how we can best utilise emerging technologies as core aspects of physiotherapy practice will come through funded research programmes in universities, especially in the early stages of innovation. We need to take the long-term view that, even if robotics, for example, isn’t having a big impact on physiotherapy today, one day we’ll see things like percussion and massage simply go away. We will also need to fund research on what aspects of the care we provide are really valued by patients (and what they, and funders, will pay for).
Prepare for job losses: From the article: “While [emerging technologies] can drive economic growth, it may also accelerate the eradication of some occupations, transform the nature of work in other jobs, and exacerbate economic inequality.” For example, self-driving cars are going to massively drive down the number of injuries that occur as a result of motor vehicle accidents (MVAs). Orthopaedic-related physiotherapy work is, therefore, going to dry up as the patient pool gets smaller. Preventative, personalised medicine will likewise result in dramatic reductions in the incidence of chronic conditions of lifestyle. The “education” component of practice will be outsourced to apps. Even if physiotherapy jobs are not entirely lost, they will certainly be transformed unless we start thinking of how our practice can evolve.
Nurture talent: We will need to ensure that we retain and recapture interest in the profession. I’m not sure about other countries but in South Africa, we have a relatively high attrition rate in physiotherapy after a few years of clinical work. The employment prospects and long-term career options, especially in the public health system, are quite poor and many talented physiotherapists leave because they’re bored or frustrated. I recently saw a post on LinkedIn where one of our most promising graduates from 5 years ago is now a property developer. After 4 years of intense study and commitment, and 3 years of clinical practice, he decided that physiotherapy wasn’t where he saw his long-term future. He and many others who have left health care practice represent a deep loss for the profession.
Prioritise education: At the undergraduate level we should re-evaluate the curriculum and ensure that it is fit for purpose in the 21st century. How much of our current programmes are concerned with the impact of robotics, nanotechnology, genetic engineering and artificial intelligence? We will need to create space for in-depth development within physiotherapy but also ensure development across disciplines (the so-called T-shaped graduate). Continuing professional development will become increasingly important as more aspects of professional work change and, over time, are eradicated. Those who cannot (or will not) continue learning are unlikely to have meaningful long-term careers.
Guide regulation: At the moment, progress in emerging technologies is being driven by startups that are funded with venture capital and whose primary goal is rapid growth to fuel increasing valuations. This ecosystem doesn’t encourage entrepreneurs to limit risks and instead pushes them to “move fast and break things”, which isn’t exactly aligned with the medical imperative to “first do no harm”. Health professionals will need to ensure that the technologies introduced into clinical practice are first and foremost serving the interests of patients, rather than driving up the value of medical technology startups. If we are not actively involved in regulating these technologies, we are likely to find our practice subject to them.
Understand the technology: In order to engage with any of the previous items in the list, we will first need to understand the technologies involved. For example, if you don’t know how the methods of data gathering and analysis can lead to biased algorithmic decision-making, will you be able to argue for why your patient’s health insurance funder shouldn’t make decisions about what interventions you need to provide? We need to ensure that we are not only specialists in clinical practice, but also specialists in how technology will influence clinical practice.
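To make that concrete, here is a minimal sketch of how skewed data gathering can produce biased algorithmic decisions. Everything in it is invented for illustration: the patient groups, the claim counts and the 50% approval threshold are hypothetical, not from any real funder’s system.

```python
from collections import defaultdict

# Hypothetical claims history: (patient_group, claim_approved).
# Group "B" is underrepresented (5 records vs 100) and, by chance,
# its few records skew towards denial.
history = (
    [("A", True)] * 80 + [("A", False)] * 20 +  # group A: 80% approved
    [("B", True)] * 1 + [("B", False)] * 4      # group B: sparse, 20% approved
)

# A naive "algorithm": approve a new claim if the group's historical
# approval rate is at least 50%.
counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
for group, approved in history:
    counts[group][0] += int(approved)
    counts[group][1] += 1

def decide(group):
    approved, total = counts[group]
    return approved / total >= 0.5

print(decide("A"))  # True: group A claims get approved
print(decide("B"))  # False: group B is denied, an artifact of sparse,
                    # skewed training data rather than clinical need
```

The point isn’t the arithmetic; it’s that a rule learned from unrepresentative data looks objective while quietly encoding the sampling bias, which is exactly the kind of mechanism clinicians will need to be able to explain and challenge.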
Each of the items in the list above is only very briefly covered here, and each could be the foundation for PhD-level programmes of research. If you’re interested in the future of the profession (and by that I mean you’re someone who wonders what health professional practice will look like in 100 years), I’d love to hear your thoughts. Do you know of anyone who has started building our cathedrals?
Google offers an option to download all of the data it stores about you. I requested mine and the file is 5.5 GB, roughly the equivalent of 3 million Word documents. The download includes your bookmarks, emails, contacts, your Google Drive files, your YouTube videos, the photos you’ve taken on your phone, the businesses you’ve bought from, and the products you’ve bought through Google.
They also have data from your calendar, your Google hangout sessions, your location history, the music you listen to, the Google books you’ve purchased, the Google groups you’re in, the websites you’ve created, the phones you’ve owned, the pages you’ve shared, how many steps you walk in a day…
I’ve been thinking about all the reasons that support my decision to move as much of my digital life as possible into platforms and services that give me more control over how my personal data is used. Posts like this are really just reminders for me to remember what to include, and why I’m doing this. It’s not easy to move away from Google, Facebook, Amazon, Apple and Twitter but it may just be worth it.
A good question to ask yourself when evaluating your apps is “why does this app exist?” If it exists because it costs money to buy, or because it’s the free app extension of a service that costs money, then it is more likely to be able to sustain itself without harvesting and selling your data. If it’s a free app that exists for the sole purpose of amassing a large number of users, then chances are it has been monetised by selling data to advertisers.
This is a useful heuristic for making quick decisions about whether or not you should have that app installed on your phone. Another good rule of thumb: “If you’re not paying for the product then you are the product.” Your personal data is worth a lot to companies who are either going to use it to refine their own AI-based platforms (e.g. Google, Facebook, Twitter, etc.) or who will sell your (supposedly anonymised) data to those companies. This is how things work now…you give them your data (connections, preferences, brand loyalty, relationships, etc.) and they give you a service “for free”. But as we’re seeing more and more, it really isn’t free. This is especially concerning when you realise how often your device and apps are “phoning home” with reports about you and your usage patterns, sometimes as frequently as every 2 seconds.
On a related note, if you’re interested in a potential technical solution to this problem you may want to check out Solid (social linked data) by Tim Berners-Lee, which will allow you to maintain control of your personal information but still share it with third parties under conditions that you specify.
The conversation starts with a basic overview of how the eye works, which is fascinating in itself, but then they start talking about how they’ve figured out how to insert an external (digital) process into the interface between the eye and brain, and that’s when things get crazy.
It’s not always easy to see the implications of converting physical processes into software but this is one of those conversations that really makes it simple to see. When we use software to mediate the information that the brain receives, we’re able to manipulate that information in many different ways. For example, with this system in place, you could see wavelengths of light that are invisible to the unaided eye. Imagine being able to see in the infrared or ultraviolet spectrum. But it gets even crazier.
It turns out we have cells in the interface between the brain and eye that are capable of processing different kinds of visual information (for example, reading text and evaluating movement). When both types of cell receive information meant for the other at the same time, we find it really hard to process both simultaneously. But, if software could divert the different kinds of information directly to the cells responsible for processing it, we could do things like read text while driving. The brain wouldn’t be confused because the information isn’t coming via the eyes at all and so the different streams are processed as two separate channels.
If an inferior product/technology/way of doing things can sometimes “lock in” the market, does that make network effects more about luck, or strategy? It’s not really locked in though, since over and over again the next big thing comes along. So what does that mean for companies and industries that want to make the new technology shift? And where does competitive advantage even come from when everyone has access to the same building blocks of innovation?
This is a wide-ranging conversation with W. Brian Arthur, Marc Andreessen, and Sonal Chokshi on the history of technology (mainly in Silicon Valley) and the subsequent impact on society. If you’re interested in technology in a general sense, rather than specific applications or platforms, then this is a great conversation that gets into the deeper implications of technology at a fundamental level.
Action 1: We are shutting down Google+ for consumers.
This review crystallized what we’ve known for a while: that while our engineering teams have put a lot of effort and dedication into building Google+ over the years, it has not achieved broad consumer or developer adoption, and has seen limited user interaction with apps. The consumer version of Google+ currently has low usage and engagement: 90 percent of Google+ user sessions are less than five seconds.
I don’t think it’s a surprise to anyone that Google+ wasn’t a big hit although I am surprised that they’ve taken the step to shut it down for consumers. And this is the problem with online communities in general; when the decision is made that they’re not cost-effective, they’re shut down regardless of the value they create for community members.
When Ben and I started In Beta last year we decided to use Google+ for our community announcements and have been pretty happy with what we’ve been able to achieve with it. The community has grown to almost 100 members and, while we don’t see much engagement or interaction, that’s not why we started using it. For us, it was to make announcements about planning for upcoming episodes and since we didn’t have a dedicated online space, it made sense to use something that already existed. Now that Google+ is being sunsetted we’ll need to figure out another place to set up the community.
Any high-quality speech-to-text engine requires thousands of hours of voice data for training, but publicly available voice data is very limited and the cost of commercial datasets is exorbitant. This prompted the question: how might we collect large quantities of voice data for Open Source machine learning?
One of the big problems with the development of AI is that few organisations have the large, inclusive, diverse datasets that are necessary to reduce the inherent bias in algorithmic training. Mozilla’s Common Voice project is an attempt to create a large, multilanguage dataset of human voices with which to train natural language AI.
This is why we built Common Voice. To tell the story of voice data and how it relates to the need for diversity and inclusivity in speech technology. To better enable this storytelling, we created a robot that users on our website would “teach” to understand human speech by speaking to it through reading sentences.
I think that voice and audio are probably going to be the next computer-user interface, so this is an important project to support if we want to make sure that Google, Facebook, Baidu and Tencent don’t have a monopoly on natural language processing. I see this project existing on the same continuum as OpenAI, which aims to ensure that “…AGI’s benefits are as widely and evenly distributed as possible.” Whatever you think about the possibility of AGI arriving anytime soon, I think it’s a good thing that people are working to ensure that the benefits of AI aren’t mediated by a few gatekeepers whose primary function is to increase shareholder value.
Most of the data used by large companies isn’t available to the majority of people. We think that stifles innovation. So we’ve launched Common Voice, a project to help make voice recognition open and accessible to everyone. Now you can donate your voice to help us build an open-source voice database that anyone can use to make innovative apps for devices and the web. Read a sentence to help machines learn how real people speak. Check the work of other contributors to improve the quality. It’s that simple!
The datasets are openly licensed and available for anyone to download and use, alongside other open language datasets that Mozilla links to on the page. This is an important project that everyone should consider contributing to. The interface is intuitive and makes it very easy to either submit your own voice or to validate the recordings that other people have made. Why not give it a go?
You didn’t need to know how to print on a printing press in order to read a printed book. Writing implements were readily available in various forms to record thoughts and to communicate them. Their use was simple, requiring nothing more than penmanship. The rapid advancement of technology has changed this. Tech has evolved so quickly and so universally in our culture that a new literacy is now required for people to use it effectively and efficiently.
Reading and writing as a literacy was hard enough for many of us, and now we are seeing that there is a whole new literacy that needs to be not only learned, but taught by us as well.
I wrote about the need to develop these new literacies in a recent article (under review) in OpenPhysio. From the article:
As clinicians become single nodes (and not even the most important nodes) within information networks, they will need data literacy to read, analyse, interpret and make use of vast data sets. As they find themselves having to work more collaboratively with AI-based systems, they will need the technological literacy that enables them to understand the vocabulary of computer science and engineering that enables them to communicate with machines. Failing that, we may find that clinicians will simply be messengers and technicians carrying out the instructions provided by algorithms.
It really does seem like we’re moving towards a society in which the successful use of technology is, at least to some extent, premised on your understanding of how it works. As educators, it is incumbent on us to 1) know how the technology works so that we can 2) help students use it effectively while avoiding exploitation by for-profit companies.
See also: Aoun, J. (2017). Robot proof: Higher Education in the Age of Artificial Intelligence. MIT Press.
Our search engines tried to impose structure and find relationships using mainly unintentional clues. You therefore couldn’t rely on them to find everything that would be of help, and not because the information space was too large. Rather, it was because the space was created by us slovenly humans.
Interesting article on how search algorithms have changed as the web has grown in scale. In the beginning, we got results that were determined by precision and recall (although optimising for one meant reducing the importance of the other). Then relevance became necessary to include as the number of possible results became too large i.e. when you have 100 000 articles that match the topic, the search engine must decide how to rank them for you. Over time, interestingness was another concept that was built into the algorithm; it’s not just that the results should be accurate and relevant, but they should be interesting too.
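The precision/recall trade-off mentioned above is easy to illustrate with a toy example (the document names and sets here are made up):

```python
# Ground truth: the documents that are actually relevant to a query.
relevant = {"doc1", "doc2", "doc3", "doc4"}

# What a hypothetical search engine returned for that query.
retrieved = ["doc1", "doc2", "doc9"]

hits = [d for d in retrieved if d in relevant]
precision = len(hits) / len(retrieved)  # how much of what we returned is relevant
recall = len(hits) / len(relevant)      # how much of what is relevant we returned

print(round(precision, 2))  # 0.67 -- 2 of the 3 results were relevant
print(round(recall, 2))     # 0.5  -- we found only 2 of the 4 relevant docs
```

Returning more documents tends to push recall up and precision down, which is why optimising for one reduces the importance of the other, and why ranking by relevance only became necessary once result sets grew too large to read in full.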
Currently, there’s interest in serendipity, where search engines return results that are slightly different to what you’re looking for (but not so different that you ignore them) and may serve to provide an alternative point of view, helping you avoid the filter bubble. As we move forward, we may also begin seeing calls for an increase in the truthfulness of results (which may reasonably be called quality). As I said, it’s an interesting article that covers a lot with respect to how search engines work, and it’s useful for anyone who has ever told someone to “just Google it”.