OpenAI executive: AI is approaching the capabilities of a research intern

OpenAI is moving closer to one of its milestone goals: a system that can function at the same level as a research intern.

On Thursday’s episode of the “Unsupervised Learning” podcast, OpenAI chief scientist Jakub Pachocki said recent advances in coding, mathematical research, and physics suggest that AI is on track to handle increasingly complex, multi-step technical tasks with less human oversight.

“I definitely see this as a signal that something is on track here,” he said.

He said the key measure is how long the model can operate almost autonomously.

“The way we distinguish research interns from fully automated researchers is the period of time we let them operate almost autonomously,” Pachocki said, citing longer task durations as a key indicator of progress.

Pachocki outlined the company’s internal goals in October: training an “AI research intern” by September 2026, followed by a fully autonomous AI researcher by March 2028.

In a post on X after the livestream, OpenAI CEO Sam Altman said that OpenAI “could completely fail” to meet its goals, but that it was important to be transparent given the potential impact.

“The explosive growth of coding tools”

Pachocki said the company is already making rapid progress on the types of tasks important to these goals, noting that coding agents like Codex now handle much of the company’s programming work.

He also pointed to mathematical benchmarks as the “North Star” for improving model reasoning, because they are easy to verify.

“We’ve seen explosive growth in coding tools,” Pachocki said. “For most people, the act of programming has changed considerably.”

He added that the short-term challenge is moving toward systems that can tackle specific technical tasks with more autonomy, use more compute, and operate for longer periods of time.

“For more specific technical ideas, like how to improve the model or how to do this evaluation differently, I think we have enough pieces to just about put it together,” he said.

Still, Pachocki was clear that AI is not ready to operate independently at the level of a full-fledged researcher.

“You’re not going to have a system this year where you can just say, ‘Improve the model, fix the adjustments,’ and it will do it,” he said.