- Meryl Sebastian
- BBC News, Kochi
Last November, Muralikrishnan Chinnadurai was watching a live stream of a Tamil event in the UK when he noticed something strange.
The speech was given by a woman introduced as Dwaraka, the daughter of Tamil Tiger militant leader Velupillai Prabhakaran.
The problem: Dwaraka had been killed in an airstrike more than a decade earlier, in the final stages of Sri Lanka's civil war in 2009. She was 23 at the time, and her body was never found.
Yet here she was, seemingly now a middle-aged woman, urging Tamils around the world to take up the political struggle for freedom.
Chinnadurai, a fact-checker from the southern Indian state of Tamil Nadu, watched the video closely, spotted glitches in it and quickly realised the footage had been generated by artificial intelligence (AI).
The potential problem, Chinnadurai said, was immediately apparent: “This is an emotive issue in the state [of Tamil Nadu]. With elections approaching, misinformation can spread quickly.”
As India's elections approach, a flood of AI-generated content seems inevitable: campaign videos, personalised audio messages in a range of Indian languages, even automated calls to voters in candidates' own voices.
Content creators like Shahid Sheikh have even had fun using AI tools to put Indian politicians in never-before-seen avatars: wearing athleisure, playing music and dancing.
But as the tools become more sophisticated, experts worry about the impact they could have when it comes to making fake news look real.
“Rumours have always been part of electioneering, [but] in the age of social media they can spread like wildfire,” said SY Qureshi, the country's former chief election commissioner.
“It could actually set the country on fire.”
Indian parties are not the first in the world to seize on the latest AI trends: just across the border in Pakistan, the technology has allowed jailed politician Imran Khan to “address” rallies.
And within India, Prime Minister Narendra Modi has already embraced the technology for campaigning, addressing audiences in Hindi while a government-developed AI tool called Bhashini translates his speech into Tamil in real time.
But the same technology can be used to manipulate words and messages.
Last month, two videos of Bollywood stars Ranveer Singh and Aamir Khan apparently campaigning for the opposition Indian National Congress party went viral. Both men filed police complaints, saying the videos were deepfakes made without their consent.
Then, on April 29, Prime Minister Modi expressed concern that AI was being used to distort speeches by senior ruling party figures, including himself.
The next day, police arrested two people, one each from the opposition AAP and Indian National Congress, in connection with a doctored video of Home Minister Amit Shah.
Modi's Bharatiya Janata Party (BJP) has faced similar accusations from opposition leaders across the country.
The problem, experts say, is that despite the arrests, there is still no comprehensive regulation in place.
According to data and security researcher Srinivas Kodali, that means “if you are caught doing something wrong, you may get away with a light punishment at best.”
In the absence of regulation, creators told the BBC, they must rely on personal ethics to decide what work they will and won't take on.
The BBC has learned that politicians' requests have included doctored pornographic images, and videos and audio doctored to damage rivals' reputations.
“We have been approached about original videos which, if shared widely, could bring criticism on the politician,” reveals Divyendra Singh Jadoun.
“So the team wanted me to create a deepfake that could be passed off as the original.”
Jadoun, founder of The Indian Deepfaker (TID), which built a tool on open-source AI software to create campaign materials for Indian politicians, insists on adding disclaimers to everything he makes, so it is clear the content is not real.
But it's still hard to control.
Sheikh, who works at a marketing company in the eastern state of West Bengal, has seen his work shared by politicians and political pages on social media without permission or credit.
“A politician used an image of Modi I created without any context, or any mention that it was made using AI,” he said.
And deepfakes are so easy to create now that anyone can do it.
“What previously took seven or eight days to create can now be done in three minutes,” Jadoun explains. “All you need is a computer.”
Indeed, the BBC saw first-hand how easy it is to create a fake phone call between two people: in this case, this reporter and former US President Donald Trump.
Despite these risks, India initially said it was not considering AI legislation, but then took action in March this year after a furore over Google's Gemini chatbot's response to the question, “Is Modi a fascist?”
Rajeev Chandrasekhar, the country's junior minister for information technology, said the response violated India's IT laws.
The Indian government has since told technology companies to seek its explicit permission before releasing “unreliable” or “inadequately tested” generative AI models or tools, and has warned against outputs from such tools that “threaten the integrity of the electoral process.”
But even that may not be enough: fact-checkers say continually debunking such content is a gruelling task, especially during election season, when misinformation is at its peak.
“Misinformation travels at 100 kilometres per hour,” said Chinnadurai, who runs a media watchdog in Tamil Nadu. “The facts we put out travel at 20 kilometres per hour.”
And this misinformation is finding its way into the mainstream media, Kodali said, even as the “Election Commission remains publicly silent about AI.”
“There are no rules across the board,” Kodali said. “Instead of creating actual regulations, they're leaving it up to the tech industry to self-regulate.”
Experts say there is no perfect solution in sight for now.
“But [for now] if action is taken against people forwarding misinformation, it may make others afraid of sharing unverified information,” Qureshi said.
