How AI can spot fake news and what policymakers can do


With the rise of AI, the internet is flooded with disinformation about the election. Below are some examples of fake photos of former President Trump and President Biden generated by AI. (Image: Generated by AI by staff at the USC Price School)

Earlier this year, New Hampshire voters received phone messages sounding like President Joe Biden's voice urging them not to vote in the state's primary election. But the caller wasn't Biden; it was a robocall that used artificial intelligence to impersonate the president and deceive voters.

The rise of AI has made it easier than ever to create fake images, fake videos and altered audio recordings that appear real. With an election looming, this emerging technology is flooding the internet with disinformation, threatening to shape public opinion, trust and behavior in our democracies.

Mindy Romero

“Democracy depends on informed citizens and residents participating as much as possible and expressing their opinions and needs through the ballot box,” said Mindy Romero, director of the Center for Inclusive Democracy (CID) at the University of Southern California's Price School of Public Policy. “The concern is that declining trust in democratic institutions could impede the electoral process, foster instability and polarization, and become a tool for foreign political interference.”

Romero recently hosted a webinar titled “Elections in the Age of AI,” where experts discussed how to spot AI-generated disinformation and how policymakers can regulate this emerging technology. Panelists included University of California, Berkeley presidential public scholar David Evan Harris, Brennan Center's Elections and Government Program Advisor Mekela Panditharatne, and California Common Cause Executive Director Jonathan Mehta Stein.

Here are some tips and policy suggestions to combat AI-generated disinformation.

How to Recognize and Ignore Disinformation

  • Be skeptical. Romero said it's not a bad thing to be skeptical of political news in general — if the news seems inaccurate, sensational, or designed to evoke strong emotions, that should be a red flag.
  • Check multiple sources. If you see an image or video that makes someone's point too perfectly, supports a conspiracy theory or attacks a candidate, Stein said, take a moment before sharing it.

    “We live in a time where you have to not believe it, not retweet it, not share it, you have to double-check it,” he said. “You're going to have to Google it. You're going to have to see if it's been reported in other sources, see if it's been proven false.”

  • Use news from trusted sources. Getting information from trusted sources is one way to fight disinformation, Romero said, and people should also determine whether an article is news or opinion.

    “Fighting disinformation can be difficult. It's hard work,” Romero added. “In general, the field is calling for conversations about how governments and policymakers can act to help communities.”

What policymakers can do

Fake image of former President Obama. (Image: Generated by AI by staff at the USC Price School)

As U.S. policymakers try to tackle AI-generated disinformation, they could take inspiration from Europe, where the European Union's Digital Services Act requires tech companies with large online platforms to assess the risks their products may pose to society, including elections and democracy, Harris said.

“Then the companies have to propose mitigation plans and invite independent auditors to audit their risk assessment plans and mitigation plans,” Harris added. He noted that European law also requires tech companies to give independent researchers access to the data to study how their products affect societal issues such as democracy and elections.

According to Stein, dozens of bills have been introduced in California that seek to regulate AI. One notable proposal would require companies that make AI generation tools to embed provenance data within the digital media those tools create. This would let online users know which images, videos and audio were generated by AI, when they were created, and who created them. The bill would also require social media platforms to use that data to flag AI counterfeits.

“So if you're scrolling through Twitter or Facebook or Instagram and something is generated by an AI, under the bill it would have to have a little tag somewhere that says it's generated by an AI,” Stein said.

At the federal level, bills have been introduced in Congress that would regulate the use of AI in political advertising and issue guidelines to local election offices on the impact of AI on election management, cybersecurity and disinformation, Panditharatne said. The federal government has also released guidance on managing the risks of generative AI, including information that may be relevant to election officials.

“But so far we haven't seen any guidelines specifically addressing the use of AI by election officials,” Panditharatne said. “That's a gap that we think is important to fill.”
