https://www.oneusefulthing.org/p/scaling-the-state-of-play-in-ai
The existence of two scaling laws – one for training and another for “thinking” – suggests that AI capabilities are poised for dramatic improvements in the coming years. Even if we hit a ceiling on training larger models (which seems unlikely for at least the next couple of generations), AI can still tackle increasingly complex problems by allocating more computing power to “thinking.”
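To make the second scaling law concrete: one well-known (and much simpler than whatever o1 actually does) way to spend more compute at inference time is self-consistency, i.e. sampling many independent answers and taking a majority vote. The toy simulation below is a hypothetical sketch, not o1's mechanism: it assumes a model that answers a question correctly with probability `p_correct` on any single attempt, and shows how voting over more samples lifts accuracy.

```python
import random

def solve_once(rng, p_correct=0.6):
    # Toy stand-in for a single model attempt: correct with
    # probability p_correct (an assumed, illustrative number).
    return 1 if rng.random() < p_correct else 0

def solve_with_voting(rng, n_samples, p_correct=0.6):
    # Spend more inference-time compute: draw n_samples independent
    # answers, then take a majority vote (self-consistency style).
    votes = [solve_once(rng, p_correct) for _ in range(n_samples)]
    return 1 if sum(votes) > n_samples / 2 else 0

def accuracy(n_samples, trials=20_000, seed=0):
    # Empirical accuracy of the voting procedure over many problems.
    rng = random.Random(seed)
    correct = sum(solve_with_voting(rng, n_samples) for _ in range(trials))
    return correct / trials

if __name__ == "__main__":
    for k in (1, 5, 15):
        print(f"{k:>2} samples -> accuracy ~ {accuracy(k):.3f}")
```

With a 60%-per-attempt model, a 15-way vote lands somewhere near 80% in this toy setup: the point is simply that accuracy climbs with sampled "thinking" even though the underlying model is unchanged, which is the second axis of scaling the paragraph above describes.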
I’m sharing this while in the midst of a minor existential crisis, following the release of OpenAI o1.