In this episode of the podcast, Sam Harris speaks with Fred Kaplan about the ever-present threat of nuclear war. They discuss the history of nuclear deterrence, U.S. first-strike policy, preventive war, limited nuclear war, tactical vs. strategic weapons, Trump’s beliefs about nuclear weapons, the details of command and control, and other topics.
Harris, S. & Kaplan, F. (2020). The Bomb. Making Sense podcast.
I think it’s fair to say that I’m quite interested in the existential risk posed by nuclear weapons, as it’s a topic that’s well covered by two sources that I listen to and read a lot: the Future of Life Institute’s section on nuclear weapons, and 80,000 Hours on nuclear security. Obviously I’m not an expert, but I have found myself covering a fair amount of mainstream content on the threat of nuclear war and the subsequent challenges we’d face as a species.
But I was still surprised to be confronted with how unconcerned I am in the face of these risks. This episode of the Making Sense podcast really emphasises the insanity of how we’ve become oddly comfortable with the fact that there are two countries that are constantly on the brink of annihilating a significant percentage of the people on earth, and sending everyone else back to the stone age. How is it possible that the rest of us haven’t stopped and asked, “Hang on. That doesn’t seem reasonable”?
If you haven’t spent much time exploring the existential risk posed by the existence of nuclear weapons, this is a podcast well worth listening to.
Zotero is a free and open-source reference management software to manage bibliographic data and related research materials (such as PDF files). Notable features include web browser integration, online syncing, generation of in-text citations, footnotes, and bibliographies, as well as integration with the word processors Microsoft Word, LibreOffice Writer, and Google Docs. It is produced by the Center for History and New Media at George Mason University.
Wikipedia contributors. (2020, January 8). Zotero. In Wikipedia, The Free Encyclopedia.
Now that Mendeley is encrypting all of your libraries on your own computer, it might be worth looking for an alternative reference manager. Zotero has everything that you’d expect from a reference manager:
Importing of all kinds of resources (not just PDFs) via a browser plugin.
Automated extraction of resource metadata during import.
Notes and tags for resources.
Exporting of libraries in multiple formats.
Citation management in MS Word, Google Docs, and LibreOffice Writer.
Cross-platform (i.e. it runs on different operating systems) with the ability to sync between devices.
A browser-based version of your library that you can access when you’re not at your computer.
In addition to the standard features listed above, Zotero also has the following:
It’s open-source, which means that you’ll always have a version available for you to use, regardless of what happens to the current developers.
A plugin database that enables developers to create custom features that most users probably won’t need but which might be valuable for some.
It supports more than 30 languages.
Ability to create relationships between resources.
The developers are always working to figure out how to make your life easier as an academic and researcher (see Tweet below).
Here is a more comprehensive overview of what Zotero offers (including some of the main differences with competing software), here’s the blog where you can stay updated with development of the programme, and the Wikipedia page with some additional background and context.
If you use Mendeley, Paperpile, Endnote or any other reference manager and aren’t quite happy with any aspect of it, you might consider giving Zotero a go.
Note: This is a new experiment on the blog where I’ll share some of the open-source software that I use. Partly because I believe in the ideology that drives open-source project development, but mostly because I actually think that the open-source alternatives are better and would love for more people to use them.
In this anxious era of bullying, teen depression, and school shootings, tech companies are selling software to schools and parents that make big promises about keeping kids secure by monitoring what they say and write online. But these apps demand disturbing trade-offs in the name of safety.
This is a great episode of the Rework podcast looking at the dangers of using increasingly sophisticated technology in schools as part of programmes to “protect” children. What they really amount to are very superficial surveillance systems that can do a lot less than what the venture-backed companies say they can. If you’re a teacher or if you have kids at a school using these systems, this is a topic worth learning more about.
The show notes include a ton of links to excellent resources and also a complete transcript of the episode.
This study gives examples for implementing technology-facilitated approaches and provides the following recommendations for conducting such longitudinal, sensor-based research, with both environmental and wearable sensors in a health care setting: pilot test sensors and software early and often; build trust with key stakeholders and with potential participants who may be wary of sensor-based data collection and concerned about privacy; generate excitement for novel, new technology during recruitment; monitor incoming sensor data to troubleshoot sensor issues; and consider the logistical constraints of sensor-based research.
We’re going to be seeing more and more of this type of research in healthcare organisations, which I think is a good thing, given the following caveats (I’m sure that there are many more):
We still need to be critical about how sensors record data, what kind of data they record, and what kinds of questions are prioritised with this type of research.
Knowing more about how bodies work at the physiological level doesn’t say anything about the social, political, ethical, etc. factors that are responsible for the bigger health issues of our time e.g. chronic diseases of life.
Behaviour can be tracked but the underlying beliefs that drive behaviour are still opaque. We need to be careful not to confuse behaviour with reasons for that behaviour.
The reason I think that sensor-based research is, in general, a good thing is that the questions you’re likely to ask in these kinds of studies are the same questions we currently use observation and participant self-report to answer. We know that these forms of data collection are inherently unreliable, so it’s interesting to see people trying to address this.
However, even assuming that sensor-based studies are more reliable (and we would first need to ask: reliable against what outcomes?), having more reliable data says little about whether the questions and corresponding data are valid. In other words, we need to be careful that the data being collected are appropriate for answering the types of questions we’re asking.
Finally, it stands to reason that once we have the data on the behaviour (the easy part) we still need to do the hard research that gets at the underlying reasons for why people behave in the way that they do. Simply knowing that people tend to do X is only the first step. Understanding why they do X and not Y is another step (possibly through interviews or focus group discussions), and then trying to get them to change their behaviour may be the hardest part of all.
The top 10 in demand jobs in 2010 did not exist in 2004. We are currently preparing students for jobs that don’t exist yet, using technologies that haven’t been invented, in order to solve problems we don’t even know are problems yet.
It takes some work to find out that the claim is not true.
If you’ve spent any time in education there’s a good chance you’ve seen the Shift Happens video below (this is the original version that came out in 2009 or thereabouts…there are updated versions for 2018 and 2019). It’s very inspiring (the music helps) and for the longest time I’d recommend it to anyone who’d listen. If you haven’t seen the video then watch it now before we move on.
I’ve watched this video a lot, mainly in the first few years after starting as an academic, because the narrative was perfectly aligned with the way I was thinking and the work I was doing. But as I’ve spent more time in education and research, I’ve become increasingly skeptical of “sound bite” solutions to pedagogical problems that are nuanced and complex. Having said that, until earlier this year I would still have been sympathetic to the main arguments in the video:
The rate of social and technical change is accelerating;
Because of the Internet and other emerging technologies;
Higher education is not adapting quickly enough;
But we need to future-proof our students;
So we’d better start changing soon.
In this More or Less BBC podcast, Tim Harford asks what the statistical likelihood is that 65% of future jobs haven’t been invented yet, and it seems fairly obvious straight away that it’s not a reasonable prediction. We might argue that the specific numbers are less important than the spirit of the claim, which is that the world is changing more quickly than ever before (probably true), that this matters at a fundamental level (maybe true), and that how we respond in higher education has grave consequences for the students we train (little or no evidence that this is true). Consider the following quote from a presentation given in 1957:
We are too much inclined to think of careers and opportunities as if the oncoming generations were growing up to fill the jobs that are now held by their seniors. This is not true. Our young people will fill many jobs that do not now exist. They will invent products that will need new skills. Old-fashioned mercantilism and the nineteenth-century theory in which one man’s gain was another man’s loss, are being replaced by a dynamism in which the new ideas of a lot of people become the gains for many, many more.
Josephs, D. (1957). Oral presentation at the Conference on the American High School.
Notice 1) this statement is from a keynote given about 60 years ago, and 2) how closely the narrative mirrors the concerns raised about how contemporary education doesn’t prepare students for jobs that don’t yet exist. While it may be fair to say that the narrative might still be true, just on a longer timescale, it’s almost certainly not a result of the Internet, mobile phones or any other technology that’s emerged in the past few decades.
This is why I was delighted to come across the article I opened with. It’s a reminder that it’s essential that we take critical positions on the things we care most about.
In the middle ages, cities could spend more than 100 years building a cathedral while at the same time believing that the apocalypse was imminent. They must’ve had a remarkable conviction that commissioning these projects would guarantee them eternal salvation. Compare this to the way we think about planning and design today where, for example, we don’t think more than 3 years into the future simply because that would fall outside the current organisational or electoral cycle. Sometimes it feels like the bulk of the work that a politician does today is to secure the funding that will get them re-elected tomorrow. Where do we see real-world examples of long-term planning that will help guide our decision-making in the present?
A few days ago I spent some time preparing feedback on a draft of the HPCSA minimum requirements for physiotherapy training in South Africa and one of the things that struck me was how much of it was just more-of-the-same. This document is going to inform physiotherapy education and practice for at least the next decade, and yet there was no mention of advances at the cutting edge of medical science or the massive impact that emerging technologies are going to have on clinical practice. Genetic engineering, nanotechnology, artificial intelligence and robotics are starting to drive significant changes in healthcare and it seems that, as a profession, we’re largely oblivious to what’s coming. It’s dawned on me that we have no real plan for the future of physiotherapy (the closest I’ve seen is Dave Nicholls’ new book, ironically titled The End of Physiotherapy).
What would a good plan look like? In the interests of time, I’m just going to take the high-level suggestions from this article on how the US could improve their planning for AI development and make a short comment on each (I’ve expanded on some of these ideas in my OpenPhysio article on the same topic).
Invest more: Fund research into practice innovations that take into account the social, economic, ethical and clinical implications of emerging technologies. Breakthroughs in how we can best utilise emerging technologies as core aspects of physiotherapy practice will come through funded research programmes in universities, especially in the early stages of innovation. We need to take the long-term view that, even if robotics, for example, isn’t having a big impact on physiotherapy today, one day we’ll see things like percussion and massage simply go away. We will also need to fund research on what aspects of the care we provide are really valued by patients (and what they, and funders, will pay for).
Prepare for job losses: From the article: “While [emerging technologies] can drive economic growth, it may also accelerate the eradication of some occupations, transform the nature of work in other jobs, and exacerbate economic inequality.” For example, self-driving cars are going to massively drive down the injuries that occur as a result of motor vehicle accidents. Orthopaedic-related physiotherapy work is, therefore, going to dry up as the patient pool gets smaller. Preventative, personalised medicine will likewise result in dramatic reductions in the incidence of chronic conditions of lifestyle. The “education” component of practice will be outsourced to apps. Even if physiotherapy jobs are not entirely lost, they will certainly be transformed unless we start thinking of how our practice can evolve.
Nurture talent: We will need to ensure that we retain and recapture interest in the profession. I’m not sure about other countries but in South Africa, we have a relatively high attrition rate in physiotherapy after a few years of clinical work. The employment prospects and long-term career options, especially in the public health system, are quite poor and many talented physiotherapists leave because they’re bored or frustrated. I recently saw a post on LinkedIn where one of our most promising graduates from 5 years ago is now a property developer. After 4 years of intense study and commitment, and 3 years of clinical practice, he just decided that physiotherapy isn’t where he sees his long-term future. He and many others who have left health care practice represent a deep loss for the profession.
Prioritise education: At the undergraduate level we should re-evaluate the curriculum and ensure that it is fit for purpose in the 21st century. How much of our current programmes are concerned with the impact of robotics, nanotechnology, genetic engineering and artificial intelligence? We will need to create space for in-depth development within physiotherapy but also ensure development across disciplines (the so-called T-shaped graduate). Continuing professional development will become increasingly important as more aspects of professional work change and over time, are eradicated. Those who cannot (or will not) continue learning are unlikely to have meaningful long-term careers.
Guide regulation: At the moment, progress in emerging technologies is being driven by startups who are funded with venture-capital and whose primary goal is rapid growth to fuel increasing valuations. This ecosystem doesn’t encourage entrepreneurs to limit risks and instead pushes them to “move fast and break things”, which isn’t exactly aligned with the medical imperative to “first do no harm”. Health professionals will need to ensure that technologies that are introduced into clinical practice are first and foremost serving the interests of patients, rather than driving up the value of medical technology startups. If we are not actively involved in regulating these technologies, we are likely to find our practice subject to them.
Understand the technology: In order to engage with any of the previous items in the list, we will first need to understand the technologies involved. For example, if you don’t know how the methods of data gathering and analysis can lead to biased algorithmic decision-making, will you be able to argue for why your patient’s health insurance funder shouldn’t make decisions about what interventions you need to provide? We need to ensure that we are not only specialists in clinical practice, but also specialists in how technology will influence clinical practice.
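To make the point about biased algorithmic decision-making a little more concrete, here is a toy sketch. Everything in it is invented for illustration: the patient groups, the historical records, and the deliberately naive “training” rule. Real systems are far more complex, but the failure mode is the same.

```python
# Minimal, hypothetical sketch of how skewed historical data produces a
# biased decision rule. All data and group names are invented.

from collections import defaultdict

# Historical funding decisions: (patient_group, intervention_approved).
# Group "B" was historically under-approved for reasons unrelated to need.
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", False), ("B", False), ("B", False), ("B", True),
]

def train_majority_rule(records):
    """'Learn' the majority historical decision for each group."""
    outcomes = defaultdict(list)
    for group, approved in records:
        outcomes[group].append(approved)
    # Approve a group only if it was approved more often than not in the past.
    return {g: sum(v) > len(v) / 2 for g, v in outcomes.items()}

model = train_majority_rule(history)
# The "model" simply replays past inequity: otherwise identical patients get
# different decisions depending only on group membership.
print(model)  # {'A': True, 'B': False}
```

A real funder’s model would involve many more variables, but the lesson carries over: a model optimised to match inequitable historical decisions will reproduce that inequity, which is exactly why clinicians need to understand how the data were gathered before accepting the algorithm’s output.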
Each of the items in the list above is only very briefly covered here, and each could be the foundation for PhD-level programmes of research. If you’re interested in the future of the profession (and by that I mean you’re someone who wonders what health professional practice will look like in 100 years), I’d love to hear your thoughts. Do you know of anyone who has started building our cathedrals?
Google offers an option to download all of the data it stores about you. I requested mine and the file was 5.5 GB, which is roughly the equivalent of 3 million Word documents. The download includes your bookmarks, emails, contacts, your Google Drive files, your YouTube videos, the photos you’ve taken on your phone, the businesses you’ve bought from, and the products you’ve bought through Google.
They also have data from your calendar, your Google hangout sessions, your location history, the music you listen to, the Google books you’ve purchased, the Google groups you’re in, the websites you’ve created, the phones you’ve owned, the pages you’ve shared, how many steps you walk in a day…
I’ve been thinking about all the reasons that support my decision to move as much of my digital life as possible into platforms and services that give me more control over how my personal data is used. Posts like this are really just reminders for me to remember what to include, and why I’m doing this. It’s not easy to move away from Google, Facebook, Amazon, Apple and Twitter but it may just be worth it.
A good question to ask yourself when evaluating your apps is “why does this app exist?” If it exists because it costs money to buy, or because it’s the free app extension of a service that costs money, then it is more likely to be able to sustain itself without harvesting and selling your data. If it’s a free app that exists for the sole purpose of amassing a large amount of users, then chances are it has been monetized by selling data to advertisers.
This is a useful heuristic for making quick decisions about whether or not you should have that app installed on your phone. Another good rule of thumb: “If you’re not paying for the product then you are the product.” Your personal data is worth a lot to companies who are either going to use it to refine their own AI-based platforms (e.g. Google, Facebook, Twitter, etc.) or who will sell your (supposedly anonymised) data to those companies. This is how things work now…you give them your data (connections, preferences, brand loyalty, relationships, etc.) and they give you a service “for free”. But as we’re seeing more and more, it really isn’t free. This is especially concerning when you realise how often your device and apps are “phoning home” with reports about you and your usage patterns, sometimes as frequently as every 2 seconds.
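For what it’s worth, the quoted heuristic is simple enough to express as a toy function. The parameters and their logic below are my own illustrative encoding of the rule of thumb, not anything from the original article:

```python
# A toy encoding of the "why does this app exist?" heuristic quoted above.
# Purely illustrative; real business models are messier than two booleans.

def likely_data_funded(is_paid: bool, extends_paid_service: bool) -> bool:
    """Return True if the app's most plausible revenue source is harvesting
    and selling user data, i.e. it is free and is not the free extension
    of a service that costs money."""
    return not (is_paid or extends_paid_service)

# A paid app has an obvious revenue stream.
print(likely_data_funded(is_paid=True, extends_paid_service=False))   # False
# A free, standalone app with a large user base probably monetises data.
print(likely_data_funded(is_paid=False, extends_paid_service=False))  # True
```

It’s obviously a crude rule (paid apps can still sell your data), but as a first-pass filter for what stays on your phone it does surprisingly well.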
On a related note, if you’re interested in a potential technical solution to this problem you may want to check out Solid (social linked data) by Tim Berners-Lee, which will allow you to maintain control of your personal information but still share it with 3rd parties under conditions that you specify.
The conversation starts with a basic overview of how the eye works, which is fascinating in itself, but then they start talking about how they’ve figured out how to insert an external (digital) process into the interface between the eye and brain, and that’s when things get crazy.
It’s not always easy to see the implications of converting physical processes into software but this is one of those conversations that really makes it simple to see. When we use software to mediate the information that the brain receives, we’re able to manipulate that information in many different ways. For example, with this system in place, you could see wavelengths of light that are invisible to the unaided eye. Imagine being able to see in the infrared or ultraviolet spectrum. But it gets even crazier.
It turns out we have cells in the interface between the brain and eye that are capable of processing different kinds of visual information (for example, reading text and evaluating movement). When both types of cell receive information meant for the other at once, we find it really hard to process both simultaneously. But if software could divert the different kinds of information directly to the cells responsible for processing them, we could do things like read text while driving. The brain wouldn’t be confused because the information isn’t coming via the eyes at all, and so the different streams are processed as two separate channels.