Explainer – What are the risks if advanced AI models fall into the wrong hands?


Written by Alexandra Alper

WASHINGTON (Reuters) – The Biden administration is poised to open a new front in its effort to safeguard U.S. AI from China and Russia, with preliminary plans to place guardrails around the most advanced AI models, Reuters reported on Wednesday.

Government and private sector researchers worry that U.S. adversaries could use these models, which mine vast amounts of text and images to summarize information and generate content, to wage aggressive cyberattacks or even create potent biological weapons.

Here are some of the threats posed by AI.

Deepfakes and misinformation

Deepfakes – realistic but fabricated videos created by AI algorithms trained on massive amounts of online footage – are surfacing on social media, blurring fact and fiction in the polarized world of U.S. politics.

Synthetic media like this has existed for several years, but over the past year it has been turbocharged by a slew of new "generative AI" tools, such as Midjourney, that make it cheap and easy to create convincing deepfakes.

AI-powered image-creation tools from companies including OpenAI and Microsoft can be used to produce photos that promote election- or voting-related disinformation, even though each company has policies prohibiting the creation of misleading content, researchers said in a March report.

Some disinformation campaigns simply harness AI's ability to mimic real news articles as a means of spreading false information.

Major social media platforms such as Facebook, Twitter, and YouTube have moved to ban and remove deepfakes, but their effectiveness in policing such content varies.

For example, last year a Chinese government-controlled news site using a generative AI platform spread a previously circulated false claim that the United States was operating a laboratory in Kazakhstan to produce biological weapons for use against China, the Department of Homeland Security (DHS) said in its 2024 Homeland Threat Assessment.

Speaking at an AI event in Washington on Wednesday, National Security Adviser Jake Sullivan said there is no easy solution to the problem because it combines the capabilities of AI with "the intent of state and non-state actors to use disinformation at scale to disrupt democracies, push propaganda, and shape perception in the world."

“Right now, offense is way ahead of defense,” he said.

Biological weapons

U.S. intelligence agencies, think tanks, and academics are increasingly concerned about the risks posed by foreign bad actors gaining access to advanced AI capabilities. Researchers at Gryphon Scientific and the RAND Corporation have noted that advanced AI models can supply information that could help create biological weapons.

Gryphon studied how large language models (LLMs) – computer programs that draw on massive amounts of text to generate responses to queries – could be used by hostile actors to cause harm in the life sciences, and found they "can provide information that could aid a malicious actor in creating a biological weapon by providing useful, accurate and detailed information across every step of this pathway."

For example, it found that an LLM could provide post-doctoral-level knowledge for troubleshooting problems when working with a virus with pandemic potential.

RAND's research showed that LLMs could help plan and execute a biological attack; it found they could suggest aerosol delivery methods for botulinum toxin.

Cyber weapons

In its 2024 Homeland Threat Assessment, DHS said cyber attackers would likely use AI to "develop new tools" that "enable larger-scale, faster, more efficient, and more evasive cyberattacks" against critical infrastructure such as pipelines and railroads.

DHS said China and other adversaries are developing AI technologies that could undermine U.S. cyber defenses, including generative AI programs that support malware attacks.

Microsoft said in a February report that it was tracking hacking groups affiliated with the Chinese and North Korean governments, Russian military intelligence, and Iran's Revolutionary Guard that were using large language models to refine their hacking campaigns.

New efforts to address threats

A bipartisan group of lawmakers late Wednesday announced legislation that would make it easier for the Biden administration to impose export controls on AI models to protect valuable American technology from foreign bad actors.

The bill, introduced by House Republicans Michael McCaul, John Molenaar, and Max Wise, and Democrat Raja Krishnamoorthi, would give the Commerce Department explicit authority to bar Americans from working with foreigners to develop AI systems that pose risks to U.S. national security.

Tony Samp, an AI policy adviser in Washington, said policymakers are trying to "promote innovation and avoid heavy-handed regulation that stifles innovation" as they grapple with the technology's many risks.

However, he warned that "a crackdown on AI development through regulation could impede potential breakthroughs in areas such as drug discovery, infrastructure, and national security, and cede ground to competitors overseas."

(Reporting by Alexandra Alper; Editing by Anna Driver and Stephen Coates)
