Michael Rowe

Trying to get better at getting better

Modularisation of AI models
I’m not even sure if this is a category, but I’ve been thinking about modularisation in the AI ecosystem: the idea of component parts being chained together to achieve objectives that no single component could achieve independently.

When OpenAI enabled plugins for ChatGPT, they extended the functionality of their language model in the same way that plugins extend the functionality of a web browser. So now you can add a ‘computation’ plugin (in the form of Wolfram Alpha) that enables high-level computation in ChatGPT. Many of the shortcomings of language models (for example, their inability to do basic maths) will be addressed by plugins.
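To make the routing idea concrete, here is a toy Python sketch of how an assistant might hand a maths question off to a ‘computation’ tool rather than answering it itself. The function names and matching logic are entirely hypothetical; this shows the general pattern, not OpenAI’s actual plugin API.

```python
# A toy sketch of the plugin idea: the assistant hands arithmetic off to a
# "computation" tool instead of answering it itself. Function names and routing
# logic are hypothetical; this is the general pattern, not OpenAI's plugin API.

import re

MATHS_EXPRESSION = re.compile(r"[0-9][0-9+\-*/(). ]+[0-9)]")


def computation_tool(expression: str) -> str:
    """Stand-in for a Wolfram Alpha-style plugin: evaluate simple arithmetic."""
    # eval() is acceptable here only because the expression is restricted to
    # digits and arithmetic operators; a real plugin would call an external API.
    return str(eval(expression))


def assistant(prompt: str) -> str:
    """Route maths-looking prompts to the computation tool; answer the rest directly."""
    match = MATHS_EXPRESSION.search(prompt)
    if match and any(op in match.group(0) for op in "+-*/"):
        return f"(via the computation plugin) {computation_tool(match.group(0))}"
    return "(answered by the language model itself)"


print(assistant("What is 17.5 * 42 + 3?"))    # routed to the plugin -> 738.0
print(assistant("Summarise this paragraph"))  # handled by the model
```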

And then I learned about Langchain, a software framework for building applications on top of large language models, where the individual models are abstracted out of the process. You could use something like Langchain when you have a goal that no single model can achieve on its own but that becomes achievable when you combine models. So you might use one AI model to summarise something, another to draft a plan based on the summary, another to describe how the plan would be executed, and so on. Langchain is the framework that takes the output from one model and passes it as the input to the next.
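As a rough illustration, here is what a two-step chain might look like using the LangChain Python API as it existed around the time of writing (the library has changed considerably since, so treat the specific imports and class names as assumptions to verify). It assumes an OpenAI API key is available in the environment.

```python
# A minimal sketch of chaining two model calls with LangChain's (circa 2023) API.
# Assumes OPENAI_API_KEY is set; prompts and model choice are illustrative only.

from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain, SimpleSequentialChain

llm = OpenAI(temperature=0)

# Step 1: one model call summarises the input text.
summarise = LLMChain(
    llm=llm,
    prompt=PromptTemplate(
        input_variables=["text"],
        template="Summarise the following in three sentences:\n{text}",
    ),
)

# Step 2: a second call drafts a plan based on the summary it receives.
plan = LLMChain(
    llm=llm,
    prompt=PromptTemplate(
        input_variables=["summary"],
        template="Draft a step-by-step plan based on this summary:\n{summary}",
    ),
)

# The framework passes the output of each step along as the input to the next.
pipeline = SimpleSequentialChain(chains=[summarise, plan], verbose=True)
print(pipeline.run("Long conference programme text goes here..."))
```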

Using this framework, you can create bespoke applications that use different models to solve problems you care about. A simple example I came across this week was in Dave Nicholls’ Paradoxa Substack, where he describes creating his own app for the ISIH conference.

Recently, I’ve been playing with Build AI, a tool that lets you ask an app to generate other apps that answer specific questions. I asked it to build an app for the ISIH conference in 2024 that showed people cafes and restaurants within five kilometres of where they were staying, ranked by consumer reviews in Google Maps. It did it in seconds, and it’s amazing.
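For a sense of what such a single-purpose app might reduce to under the hood, here is a rough Python sketch that queries the Google Places Nearby Search API and ranks the results by rating. The API key and coordinates are placeholders, and the response fields should be checked against Google’s documentation; this is not what Build AI actually generates.

```python
# A rough sketch of a single-purpose "conference cafes" app: query the Google
# Places Nearby Search API and rank results by review score. The key and
# coordinates below are placeholders, not the real conference location.

import requests

API_KEY = "YOUR_GOOGLE_MAPS_API_KEY"  # placeholder
HOTEL_LOCATION = "-33.8688,151.2093"  # placeholder latitude,longitude

response = requests.get(
    "https://maps.googleapis.com/maps/api/place/nearbysearch/json",
    params={
        "location": HOTEL_LOCATION,
        "radius": 5000,          # metres, i.e. five kilometres
        "type": "restaurant",
        "key": API_KEY,
    },
    timeout=10,
)
places = response.json().get("results", [])

# Rank by rating, best first, and print a simple list.
for place in sorted(places, key=lambda p: p.get("rating", 0), reverse=True):
    print(f"{place.get('rating', '?')}  {place['name']}  ({place.get('vicinity', '')})")
```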

And this got me thinking about the creativity and problem-solving capacity of society when each of us starts using AI to build single-use apps, each one addressing a very specific problem of our own. If it takes two minutes to build an app using natural language, and each app we build is simply a ‘control app’ that uses APIs to connect to plugins and other AI models, then do we still need Word, or Excel, or Outlook? Do we need Google? Do we need lawyers, or physiotherapists?

I wonder if this is what Elon Musk is talking about with the ‘everything app’ he’s building with his new company?


Open Assistant and HuggingChat
Last week I mentioned Open Assistant but hadn’t used it yet. This week I had a look at HuggingChat, Hugging Face’s web-based chat interface, which runs the open-source Open Assistant language model. The interface is a lot like ChatGPT, so it should feel familiar to anyone wanting to experiment. The output isn’t as good as that of other, more prominent language models (see the outputs below; first Open Assistant, then Bard, then ChatGPT), but it’s worth knowing about and following.
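If you want to go beyond the web interface, something like the following sketch calls an Open Assistant model directly through the Hugging Face Inference API. The model id, prompt format, and parameters are assumptions taken from the public model cards, and may differ from what HuggingChat actually runs.

```python
# A small sketch of querying an Open Assistant model via the Hugging Face
# Inference API. Model id and prompt format are assumptions to check against
# the model card; requires a Hugging Face API token in HF_API_TOKEN.

import os
import requests

MODEL_ID = "OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5"  # assumed model id
API_URL = f"https://api-inference.huggingface.co/models/{MODEL_ID}"
HEADERS = {"Authorization": f"Bearer {os.environ['HF_API_TOKEN']}"}

# Open Assistant's Pythia models expect this prompter/assistant prompt format.
prompt = "<|prompter|>Explain modularisation in the AI ecosystem.<|endoftext|><|assistant|>"

response = requests.post(
    API_URL,
    headers=HEADERS,
    json={"inputs": prompt, "parameters": {"max_new_tokens": 200}},
    timeout=60,
)
print(response.json())
```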


Levels of autonomy in medical AI
I’ve tended to think of medical AI in binary terms; it’s either there or it isn’t. But it seems more likely that we’ll have different levels of autonomy in clinical practice, in the same way that we see different levels of autonomy in self-driving cars.

Linked to this, we could probably start thinking about different levels of autonomy in AI systems that help us learn.
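As a purely illustrative sketch, here is what a levels-of-autonomy scale for clinical AI might look like, loosely mirroring the 0–5 levels used to describe self-driving cars. The clinical labels are my own assumptions, not an established taxonomy.

```python
# Illustrative only: a hypothetical autonomy scale for clinical AI, loosely
# mirroring the 0-5 driving-automation levels. The labels are assumptions,
# not an established framework.

from enum import IntEnum


class ClinicalAIAutonomy(IntEnum):
    NONE = 0            # clinician does everything; no AI involvement
    ASSISTANCE = 1      # AI flags findings; clinician interprets and decides
    PARTIAL = 2         # AI drafts outputs; clinician reviews every case
    CONDITIONAL = 3     # AI handles routine cases; clinician handles exceptions
    HIGH = 4            # AI manages a defined domain; clinician audits samples
    FULL = 5            # AI diagnoses and manages without clinician oversight


for level in ClinicalAIAutonomy:
    print(level.value, level.name)
```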


How to read a paper series
This week I was reminded of the How to Read a Paper series, published by the British Medical Journal. The series provides a great overview of different approaches for reading different kinds of journal articles, depending on the research method. These relatively short pieces are a good starting point for anyone needing a more strategic approach to reading the academic literature.

I’d like to go back and read the series again, so I’m posting this here as a kind of social pressure to push me in that direction.

Here are a few examples of the kinds of papers in the series:

  • Getting your bearings and deciding what the paper is about (http://www.bmj.com/cgi/content/full/315/7102/243)
  • Papers that go beyond numbers, which is about qualitative research (http://www.bmj.com/cgi/content/full/315/7110/740)
  • Papers that summarise other papers, which is about systematic reviews (http://www.bmj.com/cgi/content/full/315/7109/672).
