Delete All Your Apps

A good question to ask yourself when evaluating your apps is “why does this app exist?” If it exists because it costs money to buy, or because it’s the free app extension of a service that costs money, then it is more likely to be able to sustain itself without harvesting and selling your data. If it’s a free app that exists for the sole purpose of amassing a large amount of users, then chances are it has been monetized by selling data to advertisers.

Koebler, J. (2018). Delete all your apps.

This is a useful heuristic for making quick decisions about whether or not you should have that app installed on your phone. Another good rule of thumb: “If you’re not paying for the product, then you are the product.” Your personal data is worth a lot to companies that will either use it to refine their own AI-based platforms (e.g. Google, Facebook, Twitter, etc.) or sell your (supposedly anonymised) data to those companies. This is how things work now: you give them your data (connections, preferences, brand loyalty, relationships, etc.) and they give you a service “for free”. But as we’re seeing more and more, it really isn’t free. This is especially concerning when you realise how often your device and apps are “phoning home” with reports about you and your usage patterns, sometimes as frequently as every 2 seconds.

On a related note, if you’re interested in a potential technical solution to this problem you may want to check out Solid (social linked data) by Tim Berners-Lee, which aims to let you maintain control of your personal information while still sharing it with third parties under conditions that you specify.


Split learning for health: Distributed deep learning without sharing raw patient data

Can health entities collaboratively train deep learning models without sharing sensitive raw data? This paper proposes several configurations of a distributed deep learning method called SplitNN to facilitate such collaborations. SplitNN does not share raw data or model details with collaborating institutions. The proposed configurations of splitNN cater to practical settings of i) entities holding different modalities of patient data, ii) centralized and local health entities collaborating on multiple tasks…

Source: [1812.00564] Split learning for health: Distributed deep learning without sharing raw patient data

The paper describes how the design and training of an algorithm can be shared across different organisations without any of them having access to the others’ raw data or model details.

This has important implications for the development of AI-based health applications, in that hospitals and other service providers need not share raw patient data with companies like Google/DeepMind. Health organisations could do the basic algorithm design in-house with their smaller, local data sets and then send the algorithm to organisations that have the massive data sets necessary for refining it, all without exposing the original data, thereby protecting patient privacy.
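To make the idea a little more concrete, here is a minimal sketch of the split-learning pattern in PyTorch (my own toy illustration, not code from the paper): the health entity keeps the first few layers and its raw data, the collaborating institution holds the rest of the network, and only intermediate activations and their gradients ever cross the boundary.

```python
import torch
import torch.nn as nn

# Hypothetical split: the hospital keeps the first layers and its raw records,
# the collaborating institution holds the remaining layers. Only activations
# and their gradients are exchanged -- never the raw data.
client_model = nn.Sequential(nn.Linear(32, 64), nn.ReLU())                     # at the hospital
server_model = nn.Sequential(nn.Linear(64, 16), nn.ReLU(), nn.Linear(16, 2))   # at the partner

client_opt = torch.optim.SGD(client_model.parameters(), lr=0.01)
server_opt = torch.optim.SGD(server_model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

def train_step(x, y):
    # Hospital: forward pass over local, private data
    activations = client_model(x)
    sent = activations.detach().requires_grad_()   # this is what gets transmitted

    # Partner: finishes the forward pass and computes the loss
    logits = server_model(sent)
    loss = loss_fn(logits, y)
    server_opt.zero_grad()
    loss.backward()                                # also produces sent.grad
    server_opt.step()

    # Hospital: receives the gradient of the activations and finishes backprop
    client_opt.zero_grad()
    activations.backward(sent.grad)
    client_opt.step()
    return loss.item()

# Toy usage with random tensors standing in for patient records
x = torch.randn(8, 32)
y = torch.randint(0, 2, (8,))
print(train_step(x, y))
```

The paper describes several variations on this configuration (including ones where labels also stay local), but the basic pattern of exchanging activations rather than records is the same.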

First compute no harm

Is it acceptable for algorithms today, or an AGI in a decade’s time, to suggest withdrawal of aggressive care and so hasten death? Or alternatively, should it recommend persistence with futile care? The notion of “doing no harm” is stretched further when an AI must choose between patient and societal benefit. We thus need to develop broad principles to govern the design, creation, and use of AI in healthcare. These principles should encompass the three domains of technology, its users, and the way in which both interact in the (socio-technical) health system.

Source: Coiera, E. et al. (2017). First compute no harm. BMJ Opinion.

The article goes on to list some of the guiding principles for the development of AI in healthcare, including the following:

  • AI must be designed and built to meet safety standards that ensure it is fit for purpose and operates as intended.
  • AI must be designed for the needs of those who will work with it, and fit their workflows.
  • Humans must have the right to challenge an AI’s decision if they believe it to be in error.
  • Humans should not direct AIs to perform beyond the bounds of their design or delegated authority.
  • Humans should recognize that their own performance is altered when working with AI.
  • If humans are responsible for an outcome, they should be obliged to remain vigilant, even after they have delegated tasks to an AI.

The principles listed above are only a very short summary. If you’re interested in the topic of ethical decision making in clinical practice, you should read the whole thing.

MIT researchers show how to detect and address AI bias without loss in accuracy

The key…is often to get more data from underrepresented groups. For example…an AI model was twice as likely to label women as low-income and men as high-income. By increasing the representation of women in the dataset by a factor of 10, the number of inaccurate results was reduced by 40 percent.

Source: MIT researchers show how to detect and address AI bias without loss in accuracy | VentureBeat

What many people don’t understand about algorithmic bias is that, relative to the challenge of correcting bias in human beings, it can be corrected fairly easily. If machine learning outputs are biased, we can change the algorithm and we can change the datasets. What’s the plan for changing human bias?
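As a rough sketch of what “changing the datasets” can look like in practice, here is a toy example (synthetic data, scikit-learn, and not the MIT team’s actual method): when an under-represented group follows a somewhat different pattern, simply upsampling that group before training can noticeably improve the model’s accuracy for it.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.utils import resample

# Toy illustration of rebalancing: the minority group follows a slightly
# different pattern, so a model trained on the imbalanced data serves it
# poorly; upsampling that group before training helps.
rng = np.random.default_rng(0)

def make_group(n, shift):
    X = rng.normal(size=(n, 3))
    y = (X[:, 0] + shift > 0).astype(int)   # the two groups' label rules differ
    return X, y

X_maj, y_maj = make_group(2000, shift=0.0)        # well-represented group
X_min, y_min = make_group(100, shift=1.5)         # under-represented group
X_min_test, y_min_test = make_group(500, shift=1.5)

# Baseline: train on the imbalanced data as-is
baseline = LogisticRegression().fit(np.vstack([X_maj, X_min]),
                                    np.concatenate([y_maj, y_min]))

# Rebalanced: upsample the minority group (with replacement) before training
X_up, y_up = resample(X_min, y_min, replace=True,
                      n_samples=len(y_maj), random_state=0)
rebalanced = LogisticRegression().fit(np.vstack([X_maj, X_up]),
                                      np.concatenate([y_maj, y_up]))

print("minority-group accuracy, baseline:  ", baseline.score(X_min_test, y_min_test))
print("minority-group accuracy, rebalanced:", rebalanced.score(X_min_test, y_min_test))
```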

My presentation for the Reimagine Education conference

Here is a summarised version of the presentation I’m giving later this morning at the Reimagine Education conference. You can download the slides here.

E.J. Chichilnisky | Restoring Sight to the Blind

Source: After On podcast with Rob Reid. Episode 39: E.J. Chichilnisky | Restoring Sight to the Blind.

This was mind-blowing.

The conversation starts with a basic overview of how the eye works, which is fascinating in itself, but then they start talking about how they’ve figured out how to insert an external (digital) process into the interface between the eye and brain, and that’s when things get crazy.

It’s not always easy to see the implications of converting physical processes into software but this is one of those conversations that really makes it simple to see. When we use software to mediate the information that the brain receives, we’re able to manipulate that information in many different ways. For example, with this system in place, you could see wavelengths of light that are invisible to the unaided eye. Imagine being able to see in the infrared or ultraviolet spectrum. But it gets even crazier.

It turns out we have cells in the interface between the eye and brain that are capable of processing different kinds of visual information (for example, reading text and evaluating movement). When both types of cell receive information meant for the other at the same time, we find it really hard to process both simultaneously. But if software could divert the different kinds of information directly to the cells responsible for processing each one, we could do things like read text while driving. The brain wouldn’t be confused because the information isn’t coming via the eyes at all, so the different streams are processed as two separate channels.

Like I said, mind-blowing stuff.

Additional reading

a16z Podcast: Network Effects, Origin Stories, and the Evolution of Tech

If an inferior product/technology/way of doing things can sometimes “lock in” the market, does that make network effects more about luck, or strategy? It’s not really locked in though, since over and over again the next big thing comes along. So what does that mean for companies and industries that want to make the new technology shift? And where does competitive advantage even come from when everyone has access to the same building blocks of innovation?

This is a wide-ranging conversation on the history of technology (mainly in Silicon Valley) and its subsequent impact on society. If you’re interested in technology in a general sense, rather than in specific applications or platforms, this is a great conversation that gets into the deeper implications of technology at a fundamental level.

The AI Threat to Democracy

With the advent of strong reinforcement learning…, goal-oriented strategic AI is now very much a reality. The difference is one of categories, not increments. While a supervised learning system relies upon the metrics fed to it by humans to come up with meaningful predictions and lacks all capacity for goal-oriented strategic thinking, reinforcement learning systems possess an open-ended utility function and can strategize continuously on how to fulfil it.

Source: Krumins, A. (2018). The AI Threat to Democracy.

“…an open-ended utility function” means that the algorithm is given a goal state and then left to its own devices to figure out how best to optimise towards that goal. It does this by trying a solution and seeing whether it got closer to the goal. Every step that moves the algorithm closer to the goal state is rewarded (typically with a token that the algorithm is conditioned to value). In other words, an RL algorithm takes actions to maximise reward. Consequently, it represents a fundamentally different approach to problem-solving from supervised learning, which requires human intervention to tell the algorithm whether or not its conclusions are valid.
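To see what reward-driven learning looks like at its simplest, here is a toy tabular Q-learning sketch (my own illustration, unrelated to any particular system): the agent is told only which state produces a reward, and works out a policy for getting there by trial and error.

```python
import numpy as np

# Minimal tabular Q-learning: an agent on a one-dimensional track of six
# cells, rewarded only for reaching the rightmost cell. Nobody tells it
# which moves are "correct"; it simply learns to maximise reward.
n_states, n_actions = 6, 2          # actions: 0 = left, 1 = right
goal = n_states - 1
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.5, 0.9, 0.1
rng = np.random.default_rng(0)

for episode in range(500):
    state = 0
    while state != goal:
        # Explore randomly sometimes (and whenever the agent knows nothing yet)
        if rng.random() < epsilon or not Q[state].any():
            action = int(rng.integers(n_actions))
        else:
            action = int(np.argmax(Q[state]))
        next_state = max(0, state - 1) if action == 0 else min(goal, state + 1)
        reward = 1.0 if next_state == goal else 0.0
        # Q-learning update: nudge the estimate toward reward + discounted future value
        Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
        state = next_state

print(np.argmax(Q, axis=1))   # greedy action per state: 1 ("right") everywhere except the terminal cell
```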

In the video below, a DeepMind researcher uses AlphaGo and AlphaGo Zero to illustrate the difference between supervised and reinforcement learning.

This is both exciting and a bit unsettling. Exciting because it means that an AI-based system could iteratively solve problems that we don’t yet know how to solve ourselves. This has implications for the really big, complex challenges we face, like climate change. On the other hand, we should probably start thinking very carefully about the goal states that we ask RL algorithms to optimise towards, especially since we’re not specifying up front what path the system should take to reach the goal, and we have no idea whether the algorithm will take human values into consideration when making choices about achieving it. We may be at a point where the paperclip maximiser is no longer just a weird thought experiment.

Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.

Bostrom, N. (2003). Ethical Issues in Advanced Artificial Intelligence.

We may end up choosing goal states without specifying in advance what paths the algorithm should not take because they would be unaligned with human values. Like the problem that Mickey faces in the Sorcerer’s Apprentice, the unintended consequences of our choices with reinforcement learning may be truly significant.

PSA: Writing is hard

A few days ago I submitted a chapter for an edited collection on Speculative Futures for Artificial Intelligence and Educational Inclusion and I thought I’d take a moment to share some of my experience in writing it. When I talk about writing with colleagues I get the impression that they’re waiting for the moment when writing becomes easier, and are therefore in a continuous cycle of disappointment because it never does. This public service announcement is for anyone who thinks that you will one day arrive at a point where writing is easy.

The original abstract was submitted about 4 months ago and represented what I thought would make a compelling contribution to the collection. But over time I realised that the argument I was trying to make felt forced and I just couldn’t get enough out of it to make it worthwhile. This is after 2 months and about 6000 words. About a month before the due date I decided to throw most of it away and start again, this time from a new position that I thought was stronger and would make more of a novel contribution. I deleted about 4000 words.

After a few weeks, I had my first full draft of about 8000 words that needed to be cut to 6000. At this point, I started printing it out and editing by hand. After editing on paper I go back to the digital version and rewrite. Then I print it again, edit, revise and print. I usually do this 3-4 times before a final submission. The pictures below were taken on the 3rd revision of the full draft. You can see that I’m still pretty dissatisfied with how things were going. Maybe it’s because I’m not a very good writer, or maybe my thinking was still incoherent.

When I finally submitted the chapter I was still pretty unhappy with it. There were significant parts of it that felt rough. There were still a few weak arguments. Some of the sentences were awkward. And to top it all, I’m still not entirely convinced that the contribution is going to add much value to the collection (because, imposter syndrome). Now that I’ve spent 3-4 months thinking about the topic I can’t help feeling that it’s pretty average.

Maybe I’ll get better with the next one? Maybe that’s the one that will be right. Or, maybe writing is just hard.

When AI Misjudgment Is Not an Accident

The conversation about unconscious bias in artificial intelligence often focuses on algorithms that unintentionally cause disproportionate harm to entire swaths of society…But the problem could run much deeper than that. Society should be on guard for another twist: the possibility that nefarious actors could seek to attack artificial intelligence systems by deliberately introducing bias into them, smuggled inside the data that helps those systems learn.

Source: Yeung, D. (2018). When AI Misjudgment Is Not an Accident.

I’m not sure how this might apply to clinical practice but, given our propensity for automation bias, it seems that this is the kind of thing that we need to be aware of. It’s not just that algorithms will make mistakes but that people may intentionally set them up to do so by introducing biased data into the training dataset. Instead of hacking into databases to steal data, we may start seeing database hacks that insert new data into them, with the intention of changing our behaviour.

What this suggests is that bias is a systemic challenge—one requiring holistic solutions. Proposed fixes to unintentional bias in artificial intelligence seek to advance workforce diversity, expand access to diversified training data, and build in algorithmic transparency (the ability to see how algorithms produce results).
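To make the attack surface concrete, here is a toy sketch (synthetic data, scikit-learn, my own illustration rather than anything from the article) of the simplest form of training-data poisoning described above: flipping the labels on one slice of the training set shifts the model’s behaviour in that region, even though the model code itself is never touched.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy illustration of data poisoning: an attacker flips labels for records
# in one region of the feature space, so the model trained on the tampered
# data misbehaves in that region.
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

X_train, y_train = X[:1500], y[:1500]
X_test, y_test = X[1500:], y[1500:]

clean = LogisticRegression().fit(X_train, y_train)

# The "attack": flip the labels wherever the first feature is large
poisoned_rows = X_train[:, 0] > 1.0
y_poisoned = y_train.copy()
y_poisoned[poisoned_rows] = 1 - y_poisoned[poisoned_rows]

poisoned = LogisticRegression().fit(X_train, y_poisoned)

# Compare behaviour on untampered test data in the targeted region
target = X_test[:, 0] > 1.0
print("clean model, targeted region:   ", clean.score(X_test[target], y_test[target]))
print("poisoned model, targeted region:", poisoned.score(X_test[target], y_test[target]))
```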
