OpenAI wants to pay someone more than $500,000 a year to reduce the downsides of AI.
If that seems like a lot of money, consider the potential downsides: job loss, misinformation, abuse by malicious actors, environmental damage, and erosion of human agency, to name a few.
CEO Sam Altman said in an X post Saturday that the job is “stressful.” “You'll be diving right into the deep end,” Altman wrote.
Altman said the position of “head of preparedness” is “an important role at a critical time.”
“While the models are rapidly improving and are capable of many great things, they are also beginning to present some real challenges. The potential impact of the models on mental health was something we anticipated in 2025. We are now seeing that the models are getting very good at computer security, and we are starting to uncover significant vulnerabilities,” he wrote.
OpenAI's ChatGPT is helping to popularize AI chatbots among consumers, many of whom use the technology to research topics, draft emails, plan trips, and perform other simple tasks.
Some users even talk to bots as a substitute for therapy, which in some cases can exacerbate mental health issues and foster delusions and other concerning behaviors.
OpenAI announced in October that it was working with mental health experts to improve the way ChatGPT responds to users showing signs of mental illness or self-harm.
OpenAI's core mission is to develop artificial intelligence in ways that benefit all of humanity. From the beginning, the company made safety protocols a central part of its operations. But as products began to launch and pressure to turn a profit mounted, some former staffers said the company began to prioritize profits over safety.
In a May 2024 post to X announcing his resignation, Jan Leike, who led OpenAI's now-disbanded safety team, said the company had lost sight of its mission to ensure the safe deployment of the technology.
“Building smarter-than-human machines is an inherently dangerous endeavor. OpenAI is shouldering an enormous responsibility on behalf of all of humanity,” he wrote. “But over the past years, safety culture and processes have taken a backseat to shiny products.”
Less than a week later, another staff member announced his resignation from the company, also citing safety concerns. Former staffer Daniel Kokotajlo said in a May 2024 blog post that he resigned because he had “lost confidence that OpenAI would behave responsibly around the time of AGI.”
Kokotajlo later told Fortune that OpenAI once had about 30 people working on safety issues related to AGI, a still-theoretical form of AI that reasons like humans, but that a series of departures had cut that number roughly in half.
Aleksander Madry, the company's previous head of preparedness, moved to a new role in July 2024. The preparedness position sits within OpenAI's Safety Systems team, which develops safety measures, frameworks, and evaluations for the company's models. The job pays $555,000 a year plus equity.
“You will be a leader directly responsible for building and coordinating capability assessments, threat models, and mitigations that form a consistent, rigorous, and operationally scalable safety pipeline,” the job posting states.
