Federal intelligence agencies warn that generative AI poses a threat to election security



Generative artificial intelligence could threaten the security of this November's election, intelligence agencies warned in a new federal bulletin.

Generative AI can produce new images, audio, video, and code, including so-called "deepfake" videos that make it appear as if a person said something they never said.

Domestic and international actors could use this technology to create serious challenges for the 2024 election cycle, according to an analysis compiled by the Department of Homeland Security and sent to law enforcement agencies across the country. Federal bulletins are messages issued periodically to law enforcement partners, intended to call attention to specific threats or concerns.

"A variety of threat actors will likely seek to leverage synthetic media, produced with generative artificial intelligence (AI), to influence and sow discord during the 2024 U.S. election cycle, and AI tools may be used to intensify efforts to disrupt the elections," said the bulletin, which was shared with CBS News. "As the 2024 election cycle progresses, generative AI tools will likely increase opportunities for domestic and international threat actors to interfere by exacerbating emergencies, disrupting election processes, and attacking election infrastructure."

Director of National Intelligence Avril Haines also warned Congress about the dangers of generative AI during a Senate Intelligence Committee hearing last week, saying the technology could create realistic "deepfakes" whose origins can be obscured.

"Innovations in AI have enabled foreign influence actors to generate seemingly authentic and customized messages more efficiently and at scale," she said, adding that the United States is more prepared for elections than ever before.

Director of National Intelligence Avril Haines testifies before the Senate Armed Services Committee on May 2, 2024 in Washington, DC.

Win McNamee/Getty Images


One example cited by DHS in its bulletin is the fake robocall imitating President Joe Biden's voice that circulated the night before the New Hampshire primary in January. The fake audio message went viral, urging recipients to "save" their votes for the November general election instead of participating in their state's primary.

"The timing of election-specific AI-generated media can be just as critical as the content itself, as rebutting or debunking false content that has spread online can take time," the bulletin said.

The memo added that the threat extends overseas: in November 2023, on election day in a southern Indian state, an AI-generated video urged people to vote for a particular candidate, leaving authorities no time to debunk it.

The bulletin also warns of the potential use of artificial intelligence to target election infrastructure.

"Generative AI could also be leveraged to enhance attack planning if threat actors, particularly violent extremists, seek to target U.S. election symbols or critical infrastructure," the report said. "This may include helping them understand U.S. elections and related infrastructure, scanning internet-connected election infrastructure for potential vulnerabilities, identifying and aggregating lists of election targets and events, and providing new or improved tactical guidance for an attack."

DHS said some violent extremists have even experimented with AI chatbots to fill gaps in guidance on tactics and weapons, though the department has not yet observed the technology being used to compile election-related targeting information.
