WASHINGTON: The proliferation of AI-generated or AI-manipulated images and video footage of Middle East wars is eroding trust in real news, a media analyst has warned.
A recent example is a video showing Dubai’s Burj Khalifa collapsing in a cloud of dust. The video was viewed more than 12 million times before a crowd-sourced verification system debunked it.
Conversely, amid online speculation that Prime Minister Benjamin Netanyahu had been killed or injured in an Iranian attack, Israel released three videos of the prime minister, including one filmed in a coffee shop.
Social media was flooded with claims that the footage was fake because Netanyahu appeared to have six fingers, a common hallmark of AI-generated imagery. “Last I checked, humans don’t normally have six fingers, but AIs do,” one post on X said, garnering nearly 5 million views. “Is Prime Minister Netanyahu no more?”
In fact, the footage was real and the “extra finger” was a trick of the light.
“The rise of AI deepfakes and the disregard for real footage are two sides of the same coin,” said Sophia Rubinson of the misinformation watchdog NewsGuard. “When everything can be fake, it’s easy to believe anything is fake.”
Technology platforms are saturated with so-called “AI slop” — surrealistic AI fabrications that drown out the real picture and further erode trust.
According to the Institute for Strategic Dialogue in London, X accounts posting AI-generated content about the war have been viewed more than 1 billion times.
The Meta Oversight Board, which reviews Facebook’s content moderation decisions, said: “We believe technology platforms do not do a good enough job of helping users identify whether content is AI-generated or authentic. Fake content can be harmful, inciting further violence and fueling further conflict.”
