Over 200 leaders and experts demand global “red line” for AI use



More than 200 prominent politicians, public figures, and scientists have published a letter calling for a binding international "red line" to prevent dangerous uses of artificial intelligence (AI). The letter was released to coincide with the 80th session of the United Nations General Assembly (UNGA).

The list of signatories included 10 Nobel Prize winners, eight former heads of state and ministers, and several leading AI researchers. More than 70 organizations from around the world have also joined the campaign, including Taiwan AI Labs, the European Foundation for Progress Research, AI Governance and Safety Canada, and the Beijing Academy of Artificial Intelligence.

"While AI holds immense potential to advance human wellbeing, its current trajectory presents unprecedented dangers," the letter read. "We urgently call for international red lines to prevent unacceptable AI risks."

Among those drawing attention to the call was Nobel Peace Prize winner Maria Ressa, who announced the letter in her opening speech during the high-level week of the UN General Assembly on Monday.

She warned: "Without AI safeguards, we could soon face epistemic chaos, engineered pandemics, and systematic human rights violations."

"History teaches us that cooperation is the only reasonable way to pursue national interests when faced with irreversible, borderless threats," Ressa said.

The short letter, published on a dedicated site called "red-lines.ai," raised the fear that AI could soon "far surpass human capabilities," escalating the risks of widespread disinformation and large-scale manipulation. It argued that this could lead to domestic and international security concerns, mass unemployment, and systematic human rights violations.

"Some advanced AI systems have already exhibited deceptive and harmful behavior, yet these systems are being given more autonomy to take actions and make decisions in the world," the letter warned. "Many experts, including those at the forefront of development, have warned that it will become increasingly difficult to exercise meaningful human control in the coming years."

To address this challenge, the public figures and organizations that signed the letter called on governments to act "before the window for meaningful intervention closes."

Specifically, it suggested that an international agreement on clear, verifiable red lines, building on and enforcing existing global frameworks and voluntary corporate commitments, is necessary to prevent these "unacceptable" risks.

"We urge governments to reach an international agreement on red lines for AI by the end of 2026," the letter stated.

According to the letter, this near-term date was chosen because the pace of AI development means that risks once deemed merely speculative have already begun to emerge.

"Waiting longer would mean less technical and political space for effective intervention, while the likelihood of cross-border harm rises dramatically," the signatories said. "That is why 2026 must be the year the world acts."

Csaba Kőrösi, former President of the UN General Assembly, was among the letter's notable signatories, saying: "In its long history, humanity has never yet encountered an intelligence higher than our own. Within a few years, we will."

This sentiment was echoed by Ahmet Üzümcü, former Director-General of the Organisation for the Prohibition of Chemical Weapons (OPCW).

Former President of Ireland Mary Robinson and former President of Colombia Juan Manuel Santos also signed the call. Alongside these international leaders were Nobel Prize winners in chemistry, economics, peace, and physics, as well as popular and award-winning authors such as Stephen Fry and Yuval Noah Harari.

"For thousands of years, humans have learned that powerful technologies can have dangerous as well as beneficial consequences," said Harari, author of Sapiens: A Brief History of Humankind, which spent 182 weeks on the New York Times bestseller list. "With AI, we may not get the chance to learn from our mistakes, because AI is the first technology that can make decisions by itself, invent new ideas, and escape our control."

He added: "Humans must agree on clear red lines for AI before the technology reshapes society beyond our understanding and destroys the foundations of our humanity."

Beyond coinciding with the opening of the latest UN General Assembly, the letter's release happened to fall on the same day that OpenAI and Nvidia (NASDAQ: NVDA) announced a "groundbreaking strategic partnership" to deploy at least 10 gigawatts of NVIDIA systems for OpenAI's AI infrastructure, backed by an investment of up to $100 billion from Nvidia in OpenAI.

The deal between two of the world's largest players in the AI space only underscored the urgency of the AI red lines letter.

Possible red lines

The letter's website provides some examples of what these hypothetical red lines might look like in the context of AI, suggesting they could focus on AI behavior (what AI systems can do) or AI use (how humans and organizations use such systems).

The site emphasized that the campaign does not endorse any specific red line, but offered some examples drawn from the areas of greatest concern. These include delegating nuclear launch authority, or other critical command-and-control decisions, to AI systems; deploying weapon systems that kill humans without meaningful human control and accountability; using AI systems for social scoring and mass surveillance; and the uncontrolled release of cyberattack agents capable of disrupting critical infrastructure.

On the feasibility of such controls, the site noted that certain red lines on AI behavior are already operational within the "safety and security" frameworks of AI companies, including Anthropic's Responsible Scaling Policy, OpenAI's Preparedness Framework, and Google DeepMind's Frontier Safety Framework.


Realistic goals

To further demonstrate that the letter's goals are reasonable, the site gave several real-world examples from history showing that "international cooperation on high stakes risks is fully achievable."

Two such examples were the Treaty on the Non-Proliferation of Nuclear Weapons (1970) and the Biological Weapons Convention (1975), both negotiated and ratified at the height of the Cold War.

More recently, it also pointed to the 2025 "High Seas Treaty," noting that it "provides a comprehensive set of regulations for the protection of high seas and serves as a sign of optimism in international diplomacy."


If controlled, AI could be a powerful force for good

The concerns raised by these public figures came on the same day that UN climate chief Simon Stiell gave an interview to a UK broadsheet that included his own call for increased regulations and protections.

Stiell argued that if governments and authorities keep AI under control, it could prove a "game changer" in the fight against the climate crisis.

"AI is not a ready-made solution, and it carries risks. But it can also be a gamechanger," the UN climate chief told The Guardian. "Done properly, AI unlocks rather than replaces human capabilities. Most importantly, it powers real-world outcomes, guiding microgrid management, climate risk mapping, and resilience planning."

Stiell's comments show that current international leaders, at least at the UN, want to see the right laws, regulations, and controls applied to AI, while also harnessing its potential for positive change.

For AI to work properly within the law and thrive in the face of growing challenges, it will need to integrate enterprise blockchain systems that can ensure the quality and ownership of data inputs. Check out CoinGeek's coverage of this emerging technology to learn more about why enterprise blockchain will be the backbone of AI.


Watch: Exploring the potential of blockchain integration with AI

https://www.youtube.com/watch?v=p9m7a46s8bw



