Canadian computer scientist Yoshua Bengio has raised concerns about the limited release of Anthropic’s latest AI model, Claude Mythos. Bengio, known for his contributions to deep learning and considered one of the godfathers of AI, argued that the core problem lies in the concentration of decision-making power within a single private company: limiting access to such systems lets one organization determine which companies and countries can protect their infrastructure from emerging cyber risks.

In an interview with Fortune magazine, Bengio said: “It makes no sense that private individuals are deciding the fate of other people’s infrastructure. What happens to all the companies and all the countries that don’t have access?”

His comments come as Mythos, an AI model capable of identifying thousands of previously unknown “zero-day” vulnerabilities, has been selectively shared with a small group of primarily US-based companies and government agencies. Bengio warned that such an approach risks leaving large parts of the world’s digital ecosystem without critical cybersecurity protections.

Anthropic cited the dual-use nature of Mythos to justify its limited deployment. While the model can help identify vulnerabilities and harden systems, it can also be exploited to launch cyberattacks that disrupt critical infrastructure. To manage this risk, the company opted for a controlled release, initially granting access to a handful of U.S. technology companies that operate widely used platforms, and briefing the U.S. government as it prepares to expand access to federal agencies.

The move has nonetheless sparked a broader debate about governance and equity. According to reports, several governments and agencies are seeking access to the model to assess vulnerabilities in their own systems. The Bank of England, for example, publicly stated that Anthropic had guaranteed British banks access in the near term.
Meanwhile, discussions at the IMF and World Bank’s spring meetings were dominated by concerns that the model could expose weaknesses in the global financial system, especially given that many regulators and companies outside the United States have yet to evaluate its findings.
Calls for international oversight of advanced AI models like Anthropic’s Mythos grow
In Bengio’s view, this situation calls for greater public involvement in regulating AI at the international level. He proposed the creation of an international body to oversee the production and use of highly sophisticated AI technologies, and argued that governments should impose strict rules and regulations on companies to prevent the misuse of advanced AI from affecting other countries’ infrastructure.

“We need an institution that actually oversees these kinds of decisions. As the power of AI continues to grow, this issue of international engagement becomes more pressing. There is no reason why AI attacks would be limited to U.S. infrastructure or American citizens. So this has to be an international issue,” Bengio added.

The discussion also touches on “AI sovereignty,” as countries seek to reduce their dependence on foreign technology providers. Those concerns are amplified by geopolitical tensions and by worries that access to critical technologies may shift with changes in national interests or policies.

The U.S. government is also working to secure access. Bloomberg obtained a memo from the White House Office of Management and Budget stating that several federal departments, including the Department of Defense (Department of the Army), the Department of the Treasury, and the Department of Homeland Security, will begin using versions of Mythos. The news comes even as Anthropic and the Department of Defense remain in a legal battle over previous supply-chain risk designations.

Beyond proprietary platforms such as Mythos, Bengio also warned of the dangers posed by open-source AI models. Open-source technology is generally considered advantageous because its openness and collaborative development can improve security.
However, AI has now advanced to the point where it can be used to hunt for vulnerabilities in open-source software.

Bengio also emphasized the importance of including China in any global AI governance framework, given the ongoing competition between the United States and China to develop advanced AI systems. He estimated that Chinese models may lag their U.S. counterparts by several months, but said that gap does not significantly reduce the associated risks.

Bengio’s criticism raises broader issues. As AI systems become more capable and powerful, decisions about how they are used and who can use them will affect people around the world. Leaving those decisions in the hands of a single company, he says, could leave critical parts of the world unprotected and concentrate too much power over critical infrastructure in the hands of a few.
