The flood of “why I quit” letters from AI researchers keeps coming.

Corporate resignations rarely make the news, except at the highest levels. But over the past two years, a flurry of X posts, Substack open letters, and public statements from prominent artificial-intelligence researchers has created a new literary form: the AI resignation letter, a genre that accrues meaning with each new addition. Taken together, the canon of these letters, some of them clearly constrained by nondisclosure agreements and other loyalties, legally enforced or not, says a lot about how the people at the top of the AI industry view themselves and their industry’s trajectory. The overall picture is dark.

Last week brought several additions to the record of why-I-left-this-incredibly-valuable-company-working-on-cutting-edge-technology letters, including one from a researcher at xAI and a New York Times op-ed from a former OpenAI researcher. Perhaps the most unusual came from Mrinank Sharma, who had been appointed head of Anthropic’s Safeguards research team a year earlier and was now announcing his exit from the major AI startup widely considered the most safety-minded. He posted a 778-word letter to X that was by turns lyrical and dark, quoting the poets Rainer Maria Rilke and Mary Oliver. The letter, which touches on AI safety, his own work on AI sycophancy and “AI-assisted bioterrorism,” and the “polycrisis” sweeping society, included three footnotes and some vague but ominous warnings.

“We seem to be approaching a threshold at which we must grow in wisdom on par with our ability to influence the world, lest we face the consequences,” Sharma wrote. “Throughout my time here, I have seen again and again how difficult it is to have our values truly reflected in our actions.”

Sharma said his final project at Anthropic was to “understand the ways AI assistants can dehumanize or distort humanity,” perhaps a nod to the scourge of AI psychosis and other emerging harms that come from people overestimating their relationships with chatbots. He said he did not know what he would do next but expressed a desire to “earn a degree in poetry” and dedicate himself to the practice of courageous public speech. Finally, the researcher included the full text of “The Way It Is” by the poet William Stafford.

In the annals of AI resignations, Sharma’s message may not be as dramatic as the board coup that ousted OpenAI CEO Sam Altman for five days in November 2023. It is less alarming than other doomsday warnings issued by AI safety researchers who left their jobs believing their employers were not doing enough to mitigate the potential harms of the smarter-than-human artificial general intelligence (AGI) that AI companies are racing to develop. (Some AI experts question whether AGI is even possible, or what the term means.)

But Sharma’s memo reveals the deep attachment that top AI researchers, who are highly compensated and work collaboratively in small teams, feel toward their work, their colleagues, and, in many cases, their employers. The resignation announcements also reveal tensions that surface over and over. At the top AI labs, there is fierce competition for resources between research and safety teams and those working on consumer-facing AI products. (There seem to be few, if any, public resignations from people on the product side.) There is pressure to ship without proper testing, established safeguards, or any real sense of what happens if a system goes rogue. And there is a deep sense of mission and purpose that can give way to feelings of betrayal.

Many of those who have publicly left AI companies worked in safety and “alignment,” fields tasked with ensuring that AI capabilities remain aligned with human needs and well-being. Many of them seem quite optimistic about AI, and even AGI, but worry that commercial pressures are eroding safeguards. Few seem to have given up on the field entirely, except perhaps Sharma, the aspiring poet. They either jump to another seven-, eight-, or nine-figure job at a competing AI startup or become a public-minded AI analyst or researcher at one of a growing number of AI think tanks.


Sam Altman. When Miles Brundage resigned from OpenAI’s AGI Readiness team in 2024, he wrote that neither OpenAI, the other frontier labs, nor the world was ready for AGI. Shelby Tauber/Reuters



They all seem to believe that either grand good or grand disaster lies ahead. “AI is advancing rapidly. The potential benefits are great, and so are the risks of extreme, even irreparable harm,” Dylan Scandinaro, who announced earlier this month that he was leaving Anthropic to become head of preparedness at OpenAI, wrote on LinkedIn. Daniel Kokotajlo, who resigned from OpenAI, said that what the company is building “could be the best thing that ever happened to humanity, but it could also be the worst if we don’t proceed with caution.”

Six members of the founding team recently left xAI, where founder Elon Musk is notorious for tinkering with the proverbial dials of the Grok chatbot. But at the center of the AI resignation letter as an industry artifact is OpenAI, the high-profile startup whose key players, including top executives and safety-minded researchers, have been streaming out of the company for the past two years. Some resigned. Some were fired. Some, according to media reports, were “pushed out” amid internal disputes. Seven left in quick succession in the first half of 2024.

With revenue paling in comparison to its massive and growing infrastructure costs, OpenAI recently announced that it would begin incorporating advertising into ChatGPT. That decision led the researcher Zoe Hitzig to quit. This week, she published her resignation letter in the Times, warning of what may happen when advertising becomes part of the substrate of chatbot conversations. “ChatGPT users generated an unprecedented archive of human candor, in part because people believed they were talking to something with no ulterior motives,” she wrote. But OpenAI, she warned, appears prepared to use that archive of human candor to target ads and undermine user autonomy, just as Facebook has done, manipulating consumers to maximize engagement, a classic sin of the modern internet.

If you are going to develop a world-changing invention, you need to be able to trust your leadership. That was the problem at OpenAI. On November 17, 2023, Altman was dramatically fired by the company’s board of directors, which claimed he was “not consistently candid in his communications with the board.” Less than a week later, he was reinstated in a boardroom counter-coup and subsequently consolidated his power. From there, the exodus began.

On May 14, 2024, the OpenAI co-founder Ilya Sutskever announced his resignation. Sutskever was replaced as head of OpenAI’s superalignment team by John Schulman, another of the company’s co-founders. A few months later, Schulman left OpenAI for Anthropic; six months after that, he announced his move to Thinking Machines Lab, the AI startup founded by Mira Murati, the former OpenAI CTO who had served as interim CEO during Altman’s brief ouster.

The day after Sutskever left, Jan Leike, who co-led OpenAI’s superalignment efforts, announced his own resignation on X. “OpenAI is shouldering an enormous responsibility on behalf of all of humanity,” Leike wrote, but its “safety culture and processes have taken a backseat to shiny products.” OpenAI, he argued, “must become a safety-first AGI company.” Less than two weeks later, Leike was hired by Anthropic. OpenAI and Anthropic did not respond to requests for comment.

At OpenAI, departing researchers say, the experts charged with alignment and safety are often sidelined, pushed out, or dispersed to other teams, leaving them with the sense that the company is intent on building an invention it cannot control. When Miles Brundage resigned from OpenAI’s AGI Readiness team in 2024, he wrote, “The bottom line is that neither OpenAI nor the other frontier labs are ready for AGI. Neither is the world.” Still, he did not criticize the company directly, adding, “Working at OpenAI is one of the most impactful things most people could ever want to do.” Brundage now runs the AI research institute AVERI.

Across the AI industry, the situation is much the same. In public statements, departing researchers tend to gently chide, and occasionally outright criticize, their employers for pursuing a potentially apocalyptic invention while emphasizing the need for the research to continue. Sometimes they offer cryptic warnings that leave AI watchers scratching their heads. Some seem genuinely alarmed by what is going on. When the OpenAI safety researcher Steven Adler left the company in January 2025, he wrote that he was “quite frightened by the pace of AI development” and wondered whether it would wipe out humanity.

But among the many AI resignation letters, there is little discussion of how AI is actually being used right now. Data-center construction, resource consumption, mass surveillance, ICE deportations, weapons development, automation, workforce disruption, the slop epidemic, the education crisis: these are the areas where many people feel AI impinging on their lives, sometimes negatively, yet the industry’s devout departees have little to say about any of it. Their warnings of some disaster on the horizon become fodder for the tech press and a de facto cover letter for the next industry job, but they rarely reach the general public.

“Tragedies happen; people get hurt or die; and you suffer and get old,” William Stafford wrote in the poem Mrinank Sharma shared. What is frightening is the tone of passivity and inevitability; one might call it resignation. Sometimes it feels as if no amount of protest matters, as Stafford writes in the next line: “Nothing you do can stop time’s unfolding.”


Jacob Silverman is a Business Insider contributor. He is most recently the author of Gilded Rage: Elon Musk and the Radicalization of Silicon Valley.

Business Insider’s Discourse articles provide perspectives on the day’s most pressing issues, informed by analysis, reporting, and expertise.




