Will AI hurt Scottish elections after John Swinney's deepfake video?



But hundreds of thousands of people on social media instead saw clips of the broadcast manipulated by artificial intelligence (AI).

It looked like Swinney, it sounded like Swinney, and everything appeared to come from a legitimate Sky broadcast. But it was completely fake.

The video, widely shared by figures including GB News contributor Lee Harris and Spectator columnist Gareth Roberts, illustrates a point made a week earlier by the senior computer science professor Dame Wendy Hall.

Professor Hall, who co-chaired the UK government's 2017 AI review, argued that the technology was “out of the bag”, meaning it was too late to protect this year's general election from the risk of AI-generated disinformation.

To find out more, the Sunday National spoke to two experts in the field: Dr Keegan McBride, Lecturer in AI, Government and Policy at the Oxford Internet Institute, University of Oxford, and Dr Nisreen Ameen, Senior Lecturer at Royal Holloway, University of London, and Vice-President of the UK Academy for Information Systems.

Here's what they said about AI disinformation, its potential impact on democracy, and how to spot it yourself.

Was it easy to make a fake John Swinney video?

Image: Newly elected Scottish National Party leader John Swinney in the garden lobby

The AI-generated video of Swinney appears to have originated from a random anonymous account on Twitter/X. It was first posted in the comments of another thread and then amplified in a new post by a right-wing figure.

It does not appear to have taken much effort on anyone's part, nor to have been a major project. So was it easy to make? Both experts gave the same answer: “Yes.”

“There are different ways to do it, and some are more complex than others,” McBride said.

“But the truth is, if you want to do something like this, it's not difficult. If you're technically good, you'll figure it out pretty quickly.”

And it would be even easier if someone wanted to spread an AI deepfake using audio alone, as with the fake recording of Keir Starmer that circulated on the day the Labour Party conference began last October.

Could AI deepfakes have a significant impact on democratic elections?

The jury is still out on whether AI deepfakes will have a significant impact on elections in the UK and around the world.

McBride argued that just because AI deepfakes are easy to create does not necessarily mean it is easy for them to influence voters.

The Oxford University expert pointed to research by colleagues which found that “when it comes to misinformation and disinformation, it's actually demand, not supply, that matters.”

He continued, “So it doesn't matter if you can create 10 million AI-generated videos. You need an audience.”

Image: Kyiv Mayor Vitali Klitschko's AI deepfake made headlines in 2022 by fooling European leaders

McBride argued that AI videos pose a greater risk if certain people are specifically targeted.

“We've seen world leaders fooled by deepfakes,” he said. “Whether it was the Estonian prime minister being tricked into talking to a fake African Union leader, or deepfakes of [Kyiv Mayor Vitali] Klitschko making calls all over Europe.”

“That, I think, is more dangerous,” McBride added. “I think there is a real potential for it to have an impact, but I don't think it will erode elections or the democratic process.”

But Ameen disagreed with that analysis. She argued that “AI algorithms can analyze user data, identify specific individuals, and tailor information and misinformation campaigns based on their biases.”

On a larger scale, this could have a huge impact, she warned.


“I think it's really important to make sure we educate the public about the dangers of AI,” Ameen said. “If we don't manage it effectively… it can lead to a really dangerous situation.

“We have billions of people eligible to vote in elections [in 2024]. So there is definitely an important message for the public: critically evaluate content before accepting it as true.

“When it comes to vote manipulation, there can be a lot of problems, for example undermining trust, and, of course, there is a risk of distorting the results.”

How can you tell if a video is an AI deepfake?

Dr Ameen and Dr McBride told the Sunday National that AI fakes were becoming increasingly difficult to spot.

As the technology advances rapidly, common tips for spotting AI-generated content are becoming less useful. For example, checking people's hands used to be a good tell, because AI was notoriously bad at generating fingers accurately.

However, the two experts shared some top tips on how to spot an AI fake.

  • Notice what's in the background of the video. Does the physics look slightly off? Are there objects that don't make sense, or that aren't arranged the way they would be in the real world?

  • Pay attention to the edges of things. Do they blend together unnaturally?

  • Pay close attention to the text on name plates, store signs, water bottles, etc. Is that an actual word or some kind of gibberish?

  • Listen to what's being said. Does it sound too perfect? Or does it include the natural “hmms”, mistakes and repetitions common in normal human conversation?

  • Observe the lips and mouth of the person speaking. Does the sound match the visuals?

  • Look carefully at their facial expressions and movements. Does anything seem unusual or unnatural?

  • Pay attention to the lighting in the video. Do the angles and shadows look wrong? Are there any other “visual artifacts” to be found?

  • Check the source. Make sure the video comes from a reliable news outlet or source before taking it at face value.





