I think it’s important to acknowledge that companies like OpenAI and Google are not interested in building commercial chatbots to give us mundane utility. Not really.
roon (an OpenAI employee and widely respected technical AI developer) articulates this clearly in a recent thread on Twitter:
…close to 100% of the fruits of ai are in the future, from self-improving superintelligence…every model until then is a minor demo / pitch deck to hopefully help raise capital for ever larger datacenters…core algorithmic and step change progress towards self-improvement is what matters

Almost everyone at the cutting edge of development at OpenAI is working to create artificial general intelligence (or, more likely, superintelligence). Given that they are true believers (i.e. this isn’t about profit or market share), it makes sense that they have fewer concerns about their impact on climate change, misinformation, and so on.
OpenAI’s goal isn’t to build commercial products for us, nor are they especially concerned about the short-term impacts of how we use those products. Their goal is to be the first company to create AGI / ASI, and after the release of o1 they’re moving on to building AI agents as the next step in that process.
The development of AI agents aligns with the third level of OpenAI’s recently introduced five-level scale for measuring progress towards AGI. OpenAI currently considers itself at the threshold of the second stage, known as “reasoners,” with its o1 model.
While we fret over the minutiae of how, when, and where our students can use LLMs, OpenAI is laser-focused on building our final invention.
Maybe OpenAI will create AGI, and maybe they won’t. I think it’s clear, though, that they believe they can. We should take a step back every now and again and focus on the bigger picture.