An AI-generated video shows an ecstatic Narendra Modi, wearing a trendy jacket and trousers, grooving on stage to a Bollywood song as the crowd cheers. The Indian Prime Minister reshared the video on X, saying, "It's really nice to be able to display such creativity at the peak of voting season."
A second video, set on the same stage, shows Mr Modi's rival Mamata Banerjee dancing in a sari-like outfit, but the audio playing in the background is drawn from her speech criticising those who quit her party to join Mr Modi's. State police have launched an investigation into the video, saying it "may affect law and order."
The contrasting reactions to the two videos, both created with artificial intelligence (AI) tools, show how the use and abuse of such tools is rising as the world's most populous country conducts a huge general election, and they highlight the concerns facing regulators and public safety authorities.
Easily produced AI videos with near-perfect shadows and hand movements can mislead even digitally literate people. The risks are higher still in a country where many of the 1.4 billion population are not tech-savvy and where manipulated content can stoke sectarian tensions, especially around election time.
A World Economic Forum study released in January found that the risk to India from misinformation over the next two years is likely to be higher than the risk from infectious diseases or illicit economic activity.
"India is already at great risk of disinformation. When AI is involved, disinformation can spread 100 times faster," said Sagar Vishnoi, a New Delhi-based consultant who advises political parties on the use of AI in this election.
"Elderly people who are not tech-savvy are increasingly falling for fake narratives powered by AI videos. This can have serious consequences, including stoking hatred against communities, castes and religions."
The 2024 national election, held over six weeks and ending on June 1, is the first in India in which AI is being deployed. The earliest examples were innocuous, limited to a few politicians using the technology to create videos and audio to personalise their campaigns.
But in April, serious cases of abuse made headlines, including a deepfake of a Bollywood actor criticising Mr Modi and a fake clip involving two of Mr Modi's close associates that led to the arrest of nine people.
Difficult to counter
The Election Commission of India last week warned political parties against using AI to spread misinformation and pointed to seven provisions of information technology and other laws covering such offences, including forgery, spreading rumours and incitement to enmity, which carry penalties of up to three years in prison.
A senior national security official in New Delhi said authorities were concerned that fake news could cause unrest. The easy availability of AI tools makes such fake news simple to fabricate, especially during election periods, and difficult to counter, the official said.
“We don't have the (adequate monitoring) capabilities… It's difficult to keep track of the ever-evolving AI environment,” the official said.
"Social media cannot be completely monitored, and there is no way to control the content people post," said a senior election official.
They declined to be named because they were not authorized to speak to the media.
AI and deepfakes are increasingly being used in elections around the world, including in the US, Pakistan, and Indonesia. A recent video that went viral in India shows the challenges facing authorities.
Over the years, India's IT ministry has set up committees that, either at their own discretion or upon receiving complaints, can order the blocking of content deemed likely to harm public order. During this election, election officials and police forces across the country have deployed hundreds of officers to spot problematic content and request its removal.
Although Prime Minister Modi's response to the AI dance video was mild, saying, "It was fun to watch myself dance," police in Kolkata, West Bengal state, have begun investigating X user SoldierSaffron7 for sharing the Banerjee video.
Dulal Saha Roy, a cybercrime officer in Kolkata, shared a typed notice with the user on X, asking them to delete the video or "be liable for severe penalties".
"No matter what happens, I won't delete it," the user told Reuters via direct message on X, declining to share a phone number or real name for fear of police action. "They can't trace [me]."
The election official told Reuters that authorities can only ask social media platforms to remove content, and that confusion ensues if the platforms argue the posts do not violate their internal policies.
Viggle
Mr Modi and Ms Banerjee's dance videos, which have been viewed 30 million and 1.1 million times respectively on X, were created using the free website Viggle. Using a photo and some basic prompts detailed in a tutorial, the site can generate, within minutes, a video of the person in the photo dancing or performing other real-life movements.
Viggle co-founder Hang Chu and Banerjee's office did not respond to Reuters queries.
Apart from the two AI dance videos, another 25-second Viggle video circulating online shows Banerjee appearing in front of a burning hospital and using a remote control to blow it up.
It is an AI-altered scene from the 2008 film The Dark Knight, in which Batman's nemesis, the Joker, wreaks havoc. The video post has been viewed 420,000 times.
West Bengal police believe the video violates India's IT laws, but no action has been taken, according to an email notification sent by X to the user and reviewed by Reuters, in which X said it "strongly believes in protecting and respecting the voice of our users."
"They can't do anything to me. I didn't take the [notice] seriously," the user told Reuters via direct message on X.