Michael Rowe

Trying to get better at getting better

There’s been some discussion suggesting that the next version of OpenAI’s frontier model may not be much of a leap in intelligence over GPT-4, and I posted some thoughts about that.

But I wanted to explore a different perspective here.

Last month, OpenAI raised $6.6 billion in VC funding that valued the company at $157 billion, closing the largest investment round in history.

This isn’t my area of expertise, but essentially, the way that these investments work is that investors agree to fund companies now with the expectation that they’re going to make a LOT more money later.

Let’s assume that these investors don’t enjoy burning money, and that they’re smart people. I know this last point is going to bother some readers, so let me qualify. I don’t mean that they’re especially intelligent or incapable of making mistakes (see the dot-com bubble). I’m just making the point that they’ve done some level of due diligence, and that they’ve seen something we haven’t. I don’t think they’re simply jumping on the bandwagon.

OpenAI must have shown something to those investors to convince them that they’re likely to see a high return on their investment. I wonder what that demo looked like.

There are a few speculative scenarios I can think of:

  • This funding round is what OpenAI needs to get to GPT-5, and they’ve convinced the investors that it’s going to be worth it.
  • OpenAI demo’d GPT-5, and that was enough to motivate this round of investment to fund the development of GPT-6.
  • GPT-5 isn’t a huge leap in intelligence, but a huge leap in capability, which could still drive significant changes in society.
  • OpenAI has made breakthroughs in model efficiency or training costs, which enables more to be done with less.
  • They’ve achieved significant advances in multimodal capabilities, enabling more sophisticated integration across text, image, video, and code than is publicly known. The recent announcement of OpenAI’s ‘Operator’ agent hints at this.
  • They’ve developed new architectures that scale differently, achieving better results with smaller models, which is more sustainable.
  • They’ve made progress on alignment and control, showing unprecedented advances in model reliability and safety, making large-scale deployment more feasible.

It seems likely that OpenAI has finished training GPT-5, given the time since they released GPT-4. And maybe it isn’t as intelligent as they’d hoped. But I can’t see how these investors would have given OpenAI $6.6 billion unless they saw something impressive. In my opinion, continued scaling in intelligence is nice, but maybe not as valuable as advances in safety and alignment. A GPT-4-level model that’s safe might be something we’re happy to integrate into systems in healthcare and education.

If you’re thinking that investors can’t possibly keep pouring funding into something that’s losing money on an epic scale, remember that the companies building frontier models believe they’re going to get to AGI, and that the path they’re on is the most likely candidate. Maybe they’re wrong, but it’s interesting to reflect on the fact that they’re the ones building it. They’re the ones with insight into where we’re headed. They’re at least one version ahead of what we have access to.

Of course, they’re also the ones who will benefit the most from massive valuations and commercial products that run on the back of their models. Maybe it’s true that this is all just smoke and mirrors, and a selfish attempt to make as much money as possible.

If they fail, they get nothing. But if they succeed, they get everything.

At the very least, they believe they’re heading towards something Important. And after this funding round, their investors seem to believe they’re heading towards something Important.

Where is the evidence to say they’re wrong?

My key takeaways from this exploration:

  1. This massive funding round is suggestive of something new in the pipeline.
  2. This ‘something’ need not necessarily be more intelligence.
  3. It may nonetheless be consequential.
