Mass deepfakes and “disinformation” threaten to disrupt the 2024 US presidential election.
It comes as experts say it's “easier than ever” to create fake videos, images and audio recordings.
This is thanks to artificial intelligence apps that can digitally recreate a person's face and voice.
And they can be used to make someone appear to do or say something they never actually did.
Earlier this year, New Hampshire residents received phone messages from an AI voice “clone” of President Joe Biden urging them to refrain from voting.
Experts say the risk of misinformation reaching voters is now at its greatest.
“The rise of AI has made it easier than ever to create fake images, fake videos and doctored audio recordings that look and sound real,” said Christian Hetrick of the University of Southern California.
“As the election approaches, emerging technologies could flood the internet with false information, swaying opinion, trust and action in our democracy.”
The problem is that AI apps make it incredibly easy to create fake content.
For example, audio can be replicated in a matter of seconds.
And recently, an expert told The US Sun that it's possible to make someone appear to say almost anything using just one photo.
How to stay safe
The University of Southern California has released official guidelines on how to spot a deepfake.
The key is to verify information from multiple sources.
This is especially true if the video or image makes a particularly bold statement.
Political news can also be highly emotive, so if a story provokes a strong reaction, take extra care to check whether it is false.
What are deepfakes and how do they work?
Here's what you need to know…
- Deepfakes are fake videos of people that look completely real.
- They are made using computers to generate convincing recreations of events that never actually happened.
- Often this involves swapping one person's face onto another's and making them say whatever the creator likes.
- The process begins by feeding the AI hundreds, or even thousands, of photos of the victim.
- Machine learning algorithms then swap out certain parts of each frame to spit out realistic-looking photos and videos that are actually fake (a rough sketch of the face-swap idea follows this list).
- In one famous deepfake video, comedian Jordan Peele created a realistic video of former President Barack Obama calling Donald Trump a “moron.”
- In another example, Will Smith's face was pasted onto Neo in the action movie “The Matrix,” a role Smith famously turned down to star in the flop “Wild Wild West,” leaving the part to Keanu Reeves.
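To give a very rough sense of the “swap one person's face for another's” step described above, here is a minimal Python sketch using OpenCV's stock face detector. Real deepfake pipelines train neural networks on thousands of frames to blend a face seamlessly; this toy version only shows the underlying idea of locating a face region and replacing its pixels. The image file names are hypothetical placeholders.

```python
# Toy face-swap sketch: find a face in each of two images and crudely
# paste one over the other. This is NOT how production deepfakes work
# (those use trained neural networks), just an illustration of the
# "locate a face, replace the pixels" concept.
import cv2

# OpenCV ships with pre-trained Haar cascade face detectors.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def first_face(image):
    """Return the (x, y, w, h) box of the first detected face."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        raise ValueError("no face found")
    return faces[0]

source = cv2.imread("source_face.jpg")   # face to copy (placeholder file)
target = cv2.imread("target_scene.jpg")  # scene to paste it into (placeholder)

sx, sy, sw, sh = first_face(source)
tx, ty, tw, th = first_face(target)

# Resize the source face to fit the target's face box, then overwrite it.
face_patch = cv2.resize(source[sy:sy + sh, sx:sx + sw], (tw, th))
target[ty:ty + th, tx:tx + tw] = face_patch

cv2.imwrite("naive_swap.jpg", target)
```

The result of a crude cut-and-paste like this is obviously fake; what makes real deepfakes dangerous is that the machine learning step learns to match lighting, expression and motion frame by frame, which is why they can be so hard to spot.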
And make sure you double-check what you read, and be careful before sharing, as you may make the problem worse.
“Democracy depends on informed citizens and residents participating to the fullest extent possible and expressing their views and desires through the ballot box,” said Mindy Romero of the University of Southern California.
“The concern is that declining trust in democratic institutions could impede electoral processes, increase instability and polarization, and become a tool for foreign political interference.”
She added that “protecting yourself from disinformation can be difficult.”
Defending Against Deepfakes
Sean Keach, head of technology and science at The Sun and The US Sun, said…
The rise of deepfakes is one of the most worrying trends in online security.
Deepfake technology can create a video of you from just a single photo, so few are safe.
But while it may seem a bit hopeless, there is also an upside to the rapid rise of deepfakes.
First, awareness of deepfakes is now much greater.
This leads people to look for signs that a video may be faked.
Similarly, technology companies are investing time and money in software that can detect fake AI content.
This means social media platforms will be able to flag fake content more often and with greater confidence.
As the quality of deepfakes improves, visual mistakes may become harder to spot, especially in a few years.
So your best defense is your own common sense: scrutinize everything you watch online.
Ask whether the video is something someone might have a motive to fake, and who benefits from you seeing it.
If you're being told something alarming, if a person is saying something that seems out of character, or if you're being pressured to act quickly, there's a chance you're watching a fraudulent clip.
Last month, experts told The US Sun that even “living an offline life” isn't enough to combat deepfakes because people can still find your photos, videos and audio clips.
Additionally, scammers are using AI to perpetrate a variety of fraudulent schemes, including romance scams.
Experts say that being able to spot mistakes in deepfakes is no longer enough, and other ways need to be found to address the threat.
This includes setting up “safe words” with friends and family and screening what you view online.