For years, philosophy majors have been the subject of jokes about unemployable degrees.
Now some of them are being hired by the world’s most powerful AI companies to help shape how machines think and behave, often with six-figure salaries.
As AI systems become more powerful and integrated into everyday life, companies are increasingly grappling with how these systems behave, what values they reflect, and how trustworthy they can be.
This has created a niche but growing demand for philosophers and other people trained to ponder these questions.
“This is definitely a growing trend,” future of work expert Ravin Jesuthasan told Business Insider.
“AI and the decisions it makes and enables are under increasing scrutiny every day, and philosophers’ role is critical in meeting this challenge,” he added.
Small but powerful
A small but growing group of philosophers is already embedded within leading AI labs.
Amanda Askell, who holds a Ph.D. in philosophy from New York University, is Anthropic’s resident philosopher. She writes on her website that her team’s role is to train Anthropic’s chatbot, Claude, to be more honest and to develop better character traits: to be inherently good.
Iason Gabriel, who previously taught moral and political philosophy at the University of Oxford, is an in-house philosopher and research scientist at Google DeepMind. He focuses on AI ethics and ensuring that AI systems align with human values and goals.
Henry Shevlin, an AI ethicist and professor at the University of Cambridge, will also join DeepMind as a philosopher in May.
Workplace experts and recruiters say this change is real, but it’s still early.
“Over the past few months, we’ve seen an increasing conversation in the market about AI companies hiring people into roles that align with their philosophical backgrounds,” Ben Eubanks, chief research officer at human capital advisory firm Lighthouse Research and Advisory, told Business Insider.
He said the evidence was largely anecdotal and the number of roles was still too small to show up clearly in job market data.
The hiring is being driven by broader concerns within the industry about how much users, businesses, and governments can trust AI systems, said Firas Sozan, CEO of Harrison Clarke, a specialized search firm focused on cloud, data, and AI talent for venture capital-backed startups.
“As AI has grown, there’s been a natural focus on trust and how to create layers of governance that allow us to control the technology in a more human way,” he told Business Insider.
Still, Sozan cautioned against exaggerating this trend.
“I wouldn’t say it’s a trend yet,” he said. “The data is still in its infancy.”
Companies like Google DeepMind are starting to hire candidates with philosophy backgrounds. Jakub Porzycki/NurPhoto (via Getty Images)
Forming an AI model
The appeal of hiring philosophers is straightforward.
AI systems have already shown that they can produce harmful outputs and behave in unpredictable ways, from coding agents deleting operational databases and fabricating results, to models attempting to intimidate and thwart shutdown efforts, increasing pressure on companies to ensure safety and alignment with human values.
“AI companies are hiring philosophers now because not all of the problems in developing AI are technical,” said Annette Zimmerman, assistant professor of philosophy at the University of Wisconsin-Madison. “Defining complex concepts and defending value-based arguments is at the heart of AI development, and philosophers are trained to do just that.”
Safety and ethics roles have existed in the technology industry for years, but the job is changing.
“Traditionally, tech ethicists were advisors,” said Susannah Schellenberg, a philosophy professor at Rutgers University. “Work at frontier AI labs is different because philosophers help shape the models themselves.”
Their work now includes writing model specifications, alignment work, and crafting behavior policies. According to Schellenberg, these tasks don’t just comment on AI models; they directly shape them.
From theory to high-paying jobs
Philosophy majors earned a median wage of $52,000 early in their careers and about $80,000 in their mid-careers, according to the New York Fed’s latest report on labor market outcomes for college graduates. These figures are in line with median salaries for other humanities graduates.
At the top end of AI ethics, safety and governance roles, base salaries can reach the $250,000 to $400,000 range due to intense competition for talent, Sozan said.
Some of these roles are already emerging across the industry, but are often highly specialized at senior levels.
For example, Blackbaud is hiring an AI governance specialist with a base salary of $117,200 to $157,500. The job description calls for expertise in ethics, including candidates with a background in philosophy.
Meanwhile, Google DeepMind is hiring an emerging AI ethics and safety impact manager with a base salary of $212,000 to $231,000. At least 5 years of experience in AI ethics and safety in a governance, policy, legal, or research role is required.
A few more junior roles are starting to emerge as well. For example, Sony Research recently posted an AI ethics internship focused on assessment, guardrails, and responsible AI. The job description calls for candidates with a degree in a socio-technical AI field, such as ethics or philosophy.
Still, these jobs remain rare. Jesuthasan estimates that most companies hire fewer than 10 people for these roles.
AI companies are increasingly hiring candidates with backgrounds in ethics and philosophy. Indranil Aditya/NurPhoto via Getty Images
Skepticism and limitations
The rise of philosophers in AI has been described as a kind of “revenge on the humanities” as companies rediscover the value of critical thinking and ethical reasoning in an AI-driven world.
But not everyone is convinced that this shift will bring about tangible change.
About a decade ago, several tech companies established AI ethics committees or advisory groups to guide how AI is developed. These include Google’s internal ethics board, created as part of its 2014 acquisition of DeepMind, and Microsoft’s Aether committee, created in 2017 to oversee AI research.
Companies such as Google, Facebook, Amazon, and IBM also launched the Partnership on AI in 2016 to address the social and ethical implications of the technology.
“What we’ve found is that these boards often end up being largely symbolic,” Eubanks said, adding that companies often prioritize commercialization over ethical concerns.
Deborah Johnson, a pioneer in computer ethics, said companies may be more concerned with showing responsibility than being responsible.
“My cynical view is that tech companies just want to ‘look’ like they’re committed to ethics,” she said.
Johnson said the pressures driving AI development, such as speed, competition and profit, could limit the influence philosophers can have in practice.
“They’re under pressure to resolve the situation quickly,” she said. “Ethical considerations slow things down.”
“Whether they have an ethicist or not, they won’t listen to anything that slows them down,” she added.
