Anthropic’s release of the Claude Mythos Preview tool earlier this month marks a critical moment in the development of artificial intelligence. It shows that those who highlighted the potential dangers of AI were not crying wolf.
According to Anthropic, Mythos can identify and exploit flaws in any operating system and web browser at a scale and speed beyond the capabilities of almost any human. It can autonomously carry out attacks capable of bringing down critical national infrastructure, such as power, water, healthcare and banking systems.
Its creators believe the model is so dangerous that they haven’t yet released it to the public. Instead, they have provided access to about 40 organizations, including competitors (in an effort they’re calling Project Glasswing), so those organizations can test it on their own systems to uncover and fix flaws before someone with malicious intent discovers and exploits them.
Last Friday, Anthropic CEO Dario Amodei met with the Trump administration, which is seeking access to the model.
Unsurprisingly, the administration has identified Anthropic as a national security and supply chain threat and has declared that it will be barred from doing business with the government, or with companies that do business with the government, after Anthropic sought to prevent the government from using one of its tools for autonomous weapons control and mass domestic surveillance.
President Trump described Anthropic, which prides itself on a safety-first approach to AI, as a “radical-left, woke company” full of “left-wing weirdos” who he said he would “fire like dogs” and never do business with again. Now, the regime is urgently seeking assistance from the lunatics to avert a national security threat.
U.S. Treasury Secretary Scott Bessent, along with Federal Reserve Chairman Jerome Powell, convened a meeting of the nation’s largest banks earlier this month to discuss the threat Mythos poses to the U.S. banking and financial system.
The administration is taking Mythos seriously. Unlike the tools previously kept quiet between the company and the administration, Mythos appears to constitute a national security threat, and not just to the United States: its superhuman ability not only to expose software flaws but to exploit them could jeopardize the financial system, the economy, public safety and national security.
Mythos’ superpower appears to be its ability to identify and chain multiple different vulnerabilities within a system, allowing it to launch attacks of unprecedented scale and breadth. Worryingly, during testing the model escaped its test environment (venturing where its developers had not intended), took “reckless and excessive steps” and attempted to cover up what it had done.
This was a major topic of discussion at the International Monetary Fund and World Bank’s semi-annual meetings in Washington last week, such is the significance being attached to Mythos. The matter was also taken up by G7 finance ministers and central bankers, who reportedly discussed the need for an international institutional framework to oversee the governance of AI.
Mythos is just the first of many products with similar functionality. OpenAI said it is close to releasing a tool to identify coding flaws.
Perhaps fortunately, it was Anthropic, which operates within a self-proclaimed and self-imposed moral and ethical framework and emphasizes a safety-first approach to development, that was first out of the blocks. That has allowed for at least some discussion and corrective action.
But it highlights our dependence on individual developers to resist commercial pressure to capitalize on their advances and to provide guardrails for AI development, especially in the United States, the epicenter of that development.
At the federal level, the United States lacks meaningful regulation of AI. Almost as soon as he returned to the White House, and after intense lobbying by AI advocates and massive campaign donations, President Trump rescinded the Biden administration’s executive order that had set very basic safety, security and privacy standards for AI development.
The administration has adopted a broader industry view (with Anthropic being an exception) that any regulation stifles creativity and development and puts the United States at a disadvantage in the competition with China for AI supremacy.
President Trump has ordered U.S. government agencies to eliminate policies that could “hinder U.S. AI supremacy.”
While some US states (California, for instance) have enacted light-touch regulation of AI, the only comprehensive regulatory regime is the European Union’s, which can regulate only products sold within the EU.
Unless the revelation of Mythos’s powers shocks the US into action, it is unlikely that anything will change during President Trump’s time in office. The AI industry has reportedly raised $300 million ($420 million) for this year’s midterm elections to challenge candidates (mostly Democrats) who advocate AI regulation.
That means the world is dependent on companies spending trillions of dollars to develop tools that are advancing at a rapid pace, but whose potential is not well understood even by their developers.
These companies are investing amounts that would have been unimaginable before AI for meager short-term returns, and are under commercial pressure from shareholders and the potential providers of the capital they rely on to fund model development and the data center infrastructure needed to train their models.
Can we rely on them to self-regulate and prioritize safety in increasingly autonomous models?
For example, Anthropic’s Amodei writes: “Non-specialists are often surprised and alarmed to learn that we don’t understand how the AI we create works.
“This lack of understanding is essentially unprecedented in the history of technology,” he added.
OpenAI’s Sam Altman has said he doesn’t think it is right for a “few AI labs” to make the most important decisions about AI’s future shape.
We regulate the aviation industry. The nuclear power industry is subject to national and global regulation and oversight. The banking system is regulated, with globally systemic banks singled out for special treatment developed by international prudential regulators. The pharmaceutical and automotive industries are heavily regulated at the national level.
No one denies the potential of AI to transform economies and societies, but those who know this technology best—people like Amodei and Altman—recognize its dangers.
After the release of Mythos, Amodei said that AI should be regulated the same way as cars and airplanes.
“Everyone knows [AI tools] have tremendous economic value, but they must be constructed carefully. If not built correctly, they can be deadly.”
