OpenAI executives discuss how their growing team is helping enterprises adopt AI

OpenAI’s teams are embedded in some of the world’s largest companies to turn AI models into real-world deployments.

Colin Jarvis, who leads OpenAI’s forward-deployed engineering team, explained on Thursday’s episode of the Altimeter Capital podcast how his team helps companies create value ranging from tens of millions of dollars to, in some cases, close to a billion dollars.

Jarvis said the team is still small, with 39 engineers, but plans to grow to 52 by the end of the year. OpenAI has 24 openings on its forward-deployed engineering team across the U.S., Europe, and Japan, with U.S. salaries of up to $345,000 plus equity, according to job listings.

The term “forward-deployed engineer” was popularized by the defense software company Palantir; it refers to engineers who work directly with clients, on-site, to tailor products to their needs.

When ChatGPT debuted in 2022, it generated “a ton of hype,” Jarvis said. “People were really excited, but it was also kind of difficult to get value out of the model.”

Early enterprise customers struggled to translate that excitement into usable systems. Jarvis said the only consistently successful approach was to embed directly with customers, learning their workflows and working collaboratively with their staff. That insight led OpenAI to set up its forward-deployed engineering model.

One of the team’s major projects was with Morgan Stanley, which became one of OpenAI’s first enterprise customers to implement GPT-4.

Jarvis said it took six to eight weeks to get the technology on a solid footing, and even longer to earn financial advisors’ trust. The team spent an additional four months running the pilot, gathering evaluations, and iterating with wealth advisors.

“In the end, about 98% adopted it,” he said.

The team also worked with a European semiconductor company to build a “debug investigation and triage agent” that can investigate failures and fix bugs. Jarvis said they looked at the company’s entire value chain and found that engineers were spending 70% to 80% of their time debugging chips.

Jarvis said forward-deployed engineering teams need to be clear about their purpose. He added that his team avoids chasing “service revenue” and instead focuses on shaping product strategy.

Forward-deployed engineering model

Earlier this year, Jarvis announced in a LinkedIn post that he would be leading OpenAI’s new forward-deployed engineering function.

“Our focus is on getting customers into production, whether through zero-to-one novel applications of our technology or by helping them scale proven cases,” he wrote in January.

Since then, OpenAI has hired forward-deployed engineers around the world, including in San Francisco, New York, Dublin, London, Paris, Munich, and Singapore.

Oliver Jay, OpenAI’s international managing director, said in July that the forward-deployed engineering model is “a very concrete way” to accelerate advanced AI into large-scale production use cases.

“We’re here to close the last gap” for companies, Jay said at the Fortune Brainstorm AI 2025 conference in Singapore.

Venture investors are also realizing the value this model brings.

Diana Hu, a partner at Y Combinator, said on an episode of the Y Combinator podcast in June that she and her team have watched founders close “six- and seven-figure deals” with major companies by acting as forward-deployed engineers.

Y Combinator CEO Garry Tan also said on the podcast that this model will give AI startups an edge and help them beat out giants like Salesforce, Oracle, and Booz Allen.
