As Bloomberg reports, OpenAI has developed an internal scale to chart the progress of its large language models on the road to artificial general intelligence (AGI).
AGI typically refers to AI with human-like intelligence and is considered the broad goal of many AI developers. OpenAI has previously defined AGI as “highly autonomous systems that outperform humans at most economically valuable tasks,” a bar well beyond the capabilities of current AI. The new scale aims to provide a structured framework for tracking and benchmarking progress toward that goal.
The scale introduced by OpenAI breaks down progress on the road to AGI into five levels, or milestones. ChatGPT and its rival chatbots sit at level 1. OpenAI claims it is approaching level 2: an AI system that can solve basic problems as well as a human with a doctorate-level education. This may be a reference to GPT-5, which OpenAI CEO Sam Altman has said would be a “giant leap forward.” Beyond level 2, the capabilities grow increasingly ambitious: level 3 is an AI agent that can handle tasks on a user's behalf, level 4 is AI that can invent new ideas and concepts, and at level 5, AI would be able to take over work not just for individuals but for entire organizations.
Level up
The idea of levels makes sense, and not just for OpenAI: a comprehensive framework would be useful internally, and it could also set a universal standard for evaluating other developers' AI models.
Still, AGI won't arrive anytime soon: previous comments by Altman and others at OpenAI suggest it could be as little as five years away, but timelines vary widely among experts, and the computing power required, along with the financial and technical challenges, is considerable.
The technical hurdles sit alongside the ethical and safety issues AGI raises; there are very real concerns about how AI at that level would affect society. And OpenAI's recent moves may not reassure everyone. In May, the company disbanded its safety team following the departure of its leader, OpenAI co-founder Ilya Sutskever. High-profile researcher Jan Leike also resigned, citing concerns that OpenAI's safety culture was being ignored. That said, by laying out a structured framework, OpenAI aims to set concrete benchmarks for its models and those of its competitors, helping to prepare us all for what's to come.