Michael Rowe

Trying to get better at getting better

This digest has an AI and machine learning focus because I’m preparing a presentation for the SAAHE conference next week, and my topic is clinicians’ perceptions of the introduction of AI into clinical practice. It’s from an international survey I completed in 2019, mostly forgot about in 2020 (because, Covid) and am finally trying to wrap up now. Anyway, my reading and thinking have been focused on this for the last week or so.


Heidelberg. (2021, May 4). Springer Nature advances its machine-generated tools and offers a new book format with AI-based literature overviews. Springer Nature Group.

It was very exciting to be part of such an innovative experiment. It enabled me to discover interesting aspects I had previously neglected, stimulating me to find out additional citations and references. The AI was able to find such connections producing a wealth of data which are summarized in the chapters of the book.

I can’t say anything about the quality of the book, only that it’s interesting to note that it’s possible to use an algorithm to create a literature review. And considering how difficult it is to do a good literature review (most are not very good), I’m fairly confident that algorithms will soon reach a point where they’re producing reviews of the literature that are at least as good as those produced by us.


Greene, T. (2020, April 14). Google’s AutoML Zero lets the machines create algorithms to avoid human bias. The Next Web | Neural.

Machines making their own algorithms, just like nature intended.

Perhaps the most interesting byproduct of Google‘s quest to completely automate the act of generating algorithms and neural networks is the removal of human bias from our AI systems. Without us there to determine what the best starting point for development is, the machines are free to find things we’d never think of.

I’ve always thought it’s unfair to talk about machine learning bias as if it’s the fault of the algorithm. The algorithm is trained on data generated by human beings, and it’s our bias that’s reflected in the outcomes. Human beings make choices about what data to collect, how to collect it, how to label it, how to design the training process, what algorithms to train, what outcomes are valued, and so on. We also built the cultural, social, legal, ethical and commercial norms within which we generate the data in the first place. So it’s human beings who are biased and whose bias influences algorithmic outcomes.

But no-one seems to be interested in trying to reduce the influence of human bias in our own decision-making, which is sub-optimal across the board. I’ve always thought that the best way to reduce bias in decision-making is to remove the human, so it’s nice to see things like AutoML starting to do just that. At some point we should acknowledge that, in many scenarios, all we’re doing is adding noise.

See also Real, E., Liang, C., So, D. R., & Le, Q. V. (2020). AutoML-Zero: Evolving Machine Learning Algorithms From Scratch. ArXiv:2003.03384 [Cs, Stat]. http://arxiv.org/abs/2003.03384


Kahng, A. B. (2021). AI system outperforms humans in designing floorplans for microchips. Nature, 594(7862), 183–185.

Modern chips are a miracle of technology and economics, with billions of transistors laid out and interconnected on a piece of silicon the size of a fingernail. Each chip can contain tens of millions of logic gates, called standard cells, along with thousands of memory blocks, known as macro blocks, or macros. The cells and macro blocks are interconnected by tens of kilometres of wiring to achieve the designed functionality.

Mirhoseini et al. estimate that the number of possible configurations (the state space) of macro blocks in the floorplanning problems solved in their study is about 10^2,500. By comparison, the state space of the black and white stones used in the board game Go is just 10^360.

First of all, this kind of complexity is just insane. I knew that chip design was complicated but I didn’t really have a good idea of the scales involved.
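To get a feel for just how far apart those two numbers are, here’s a two-line back-of-the-envelope comparison (the exponents are the ones quoted above; working in exponents avoids constructing the absurdly large integers themselves):

```python
# State-space exponents quoted in the Nature piece:
# ~10^2,500 macro-block configurations vs ~10^360 Go positions.
chip_exponent = 2500
go_exponent = 360

# Dividing the two sizes just subtracts the exponents, so the
# floorplanning space is roughly 10^2,140 times larger than Go's.
ratio_exponent = chip_exponent - go_exponent
print(f"Chip floorplanning state space is ~10^{ratio_exponent} times larger than Go's")
```

In other words, the gap between chip floorplanning and Go is itself unimaginably larger than the gap between Go and anything humans can enumerate.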

One of the problems in chip design is the painstaking process of adding the macro blocks to the chip floorplan. You end up placing blocks that later have to be moved because of how you’re laying them out. Design choices made in the beginning influence what constraints you have to work with later, and changes to later placements have a knock-on effect that means having to move earlier blocks. But that’s not what happens with the algorithm, which seems as if it’s looking into the future and predicting what blocks will need to go where, which enables it to place blocks now that won’t need to be adjusted later. This kind of prediction and management of complexity is an example of something that we – humans – simply can’t conceive of doing without augmentation.

What’s possibly even more interesting is that the researchers approached the problem of block placement on the chip floorplan as if it were a board game like Go. If you think about it, placing blocks onto a bounded space in optimal configurations that lead to outcomes quantitatively superior to other placements is pretty much what games like chess and Go consist of. While I don’t think that this counts as transfer learning, it’s definitely an interesting example of analogy, where the algorithm is being used in one context that is analogous to another. This feels like something important.


Koenig, R. (2021). Why Education Is a ‘Wicked Problem’ for Learning Engineers to Solve. EdSurge News.

We have not yet started thinking about how humans will react to those machines. And what do we need to teach humans about those machines so that the human-machine collaboration is an effective one?

This simply isn’t true. We’ve been thinking about the problem of interacting with machines for a very long time. It’s called science fiction, and we have many different lines of inquiry into how this might play out. From movies, to books, to blog posts, to tweets, we have thousands of people who spend a lot of time thinking carefully about how we might react to intelligent machines. This comment just reflects a lack of imagination.




Comments

One response to “Weekly digest (14-18 Jun 2021)”

  1. Wendy Walker

    I love your final paragraph in this post: it’s SO true, some very good minds have been pondering these questions for a long time.
    I wonder if I may ask for a copy of the study in your previous post about AI replacing physios, as I don’t have institutional membership, & the abstract looks fascinating. Thank you.