ChatGPT's developer, OpenAI, is working on a project codenamed “Strawberry,” according to internal documents seen by Reuters. The details have not been previously reported; the project is reportedly aimed at demonstrating advanced reasoning capabilities in the models offered by the Microsoft-backed startup.
As Reuters reported exclusively, the OpenAI team is actively developing Strawberry, as outlined in internal documents from May. No exact timeline for a public release has been set. Sources familiar with the matter say Strawberry is a key part of OpenAI's plans to overcome the reasoning limitations of its current models. The documents reviewed by Reuters describe what Strawberry aims to achieve, but not how.
How “Strawberry” works is a “closely guarded secret”
The inner workings of Strawberry are a closely guarded secret, even within OpenAI, the people said. The project reportedly uses a specialized Strawberry model that allows the AI system not just to generate answers but also to navigate the internet autonomously to conduct what OpenAI calls “deep research.”
According to interviews with AI researchers, this is a capability that has so far eluded AI models. An OpenAI spokesperson said the company believes the reasoning capabilities of these systems will improve over time, and emphasized its ongoing research into new AI capabilities.
Formerly known as Q*, Strawberry was already considered a breakthrough within the company: earlier this year, OpenAI demonstrated Q* answering complex scientific and mathematical problems that off-the-shelf models could not.
At an all-hands meeting, OpenAI presented a research project said to have new human-like reasoning skills. It is unclear whether this project is Strawberry, but the company hopes the innovation will significantly improve the reasoning capabilities of its AI models. Strawberry reportedly involves a specialized form of post-training, refining AI models after they have already been pre-trained on large datasets.
“It's exciting and scary.”
Strawberry reportedly bears similarities to a technique called “Self-Taught Reasoner,” or “STaR,” developed at Stanford University in 2022, one of the sources familiar with the matter said. STaR allows an AI model to “bootstrap” itself to a higher level of intelligence by iteratively creating its own training data, and could in theory enable language models to surpass human-level intelligence, one of its developers, Stanford professor Noah Goodman, told Reuters.
“I think that's exciting and frightening at the same time… If things continue to move in that direction, we as humanity have some serious things to think about,” said Goodman, who is not affiliated with OpenAI and is not familiar with Strawberry.
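To make the published STaR idea concrete, the loop can be sketched roughly as follows. This is a minimal, hypothetical Python illustration, not OpenAI's or Stanford's code: the model object and its generate_rationale and finetune methods are placeholders assumed for the example, and the loop only mirrors the publicly described idea of a model iteratively creating its own training data.

# Minimal sketch of the STaR ("Self-Taught Reasoner") loop described above.
# "Problem", "model.generate_rationale" and "model.finetune" are hypothetical
# placeholders, not a real API.

from dataclasses import dataclass

@dataclass
class Problem:
    question: str
    gold_answer: str

def star_loop(model, problems, n_rounds=3):
    """Each round: sample a rationale per problem, keep rationales that reach
    the known correct answer, and fine-tune the model on that self-generated data."""
    for _ in range(n_rounds):
        new_examples = []
        for p in problems:
            rationale, answer = model.generate_rationale(p.question)
            if answer == p.gold_answer:
                # The rationale led to the right answer: keep it as training data.
                new_examples.append((p.question, rationale, p.gold_answer))
            else:
                # "Rationalization" step: provide the gold answer as a hint and ask
                # the model to reconstruct a rationale that supports it.
                rationale, _ = model.generate_rationale(p.question, hint=p.gold_answer)
                new_examples.append((p.question, rationale, p.gold_answer))
        # Bootstrap: retrain the model on its own filtered output.
        model = model.finetune(new_examples)
    return model

In this sketch the filtering step is what drives the “bootstrapping” Goodman describes: only reasoning traces that lead to correct answers are fed back into training, so each round the model learns from its own best attempts.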