NEW DELHI: A man sits on the porch outside his house as the evening light fades. Suddenly, a growling tiger appears from behind the fence, lunges at him, grabs him by the neck and drags him into the wild.
This horrifying video, allegedly recorded by a surveillance camera in a forest area of Chandrapur, in the western Indian state of Maharashtra, went viral in November 2025. Many people shared it on X and WhatsApp, racking up millions of views.
However, it turns out that this attack did not actually occur and the video was generated by artificial intelligence (AI).
On November 7, local authorities debunked the fake video, describing it as an attempt by “anti-social forces” to create fear in a region that has seen an increase in human-tiger conflicts in recent years, and urged the public not to accept such videos as depictions of real events.
A few observant viewers spotted the giveaways in the video, including how the tiger’s movements stirred up dust but left the dead leaves lying everywhere undisturbed.
This video is one of many AI-generated wildlife videos that blur the line between reality and fiction on an unprecedented scale in India in 2025.
In one such video, a leopard attacks a moving train and drags out a passenger standing by the carriage door. In another, a tiger chases tourists on a safari in a national park. A third shows a leopard running through a shopping mall in Mumbai, sending visitors into a panic.
Made in pursuit of seemingly innocuous clicks and likes for their uploaders, these AI-generated videos are causing real-world damage and panic. Experts warn that they not only misinform and spread fear, but also distort the public’s understanding of how animals actually behave, exacerbating the threat of human-animal conflict in India.
Fake videos of big cats preying on humans risk inflaming public anger against these animals, undermining years of painstaking work by conservationists to ensure coexistence between animals and humans.
Dana Wilson, director of marketing and communications at Wildlife SOS, an Indian wildlife rescue and conservation organization, said: “These false situations mislead people about the presence of animals in the area, putting people in the area at risk.”
“So we find that a fake event can perfectly stoke the flames of an actual revenge killing,” he told The Straits Times.
Human-animal conflict is a recurring reality in India.
Rapid urban expansion is encroaching on forested habitats and bringing humans and animals into conflict. In 2024, tigers killed 73 people, and leopard attacks killed many more in rural and urban areas across the country. Wild animals such as tigers and leopards have, in turn, been bludgeoned to death in retaliation.
Against this backdrop, artificial videos of big cats in areas where they have actually been sighted are no longer seen as harmless hoaxes, but as a real threat. On several occasions, they have caused unnecessary fear and forced the authorities to launch rescue operations, wasting time and resources.
In September 2025, a 22-year-old journalism student from Ruchi Khand in Lucknow used AI to add a leopard to a selfie taken on her balcony and circulated the photo on WhatsApp, suggesting that the animal was roaming her neighborhood.
This prank spread quickly and caused havoc.
The local forest department sent nine teams to track the leopard, but despite hours of patrolling and examining CCTV footage, no trace of the leopard was found.
Authorities eventually traced the image to the student, who was taken into custody but released with a warning.
Authorities have warned that those who distribute AI-generated or manipulated videos of wild animals that cause panic or mislead the public could face legal action. However, tackling fake videos through the law has proven difficult.
India does not have a single law dedicated specifically to fake AI videos, and enforcement efforts are further complicated by rapidly advancing technology and the limited liability of platforms.
In October 2025, the Indian government proposed amendments to its Information Technology Rules that would make it mandatory for social media platforms to label AI-generated content, in a bid to address growing concerns about deepfakes and misinformation.
However, these rules have not yet been formally notified as law.
Many fear that the inability to distinguish between real threats and fictional narratives diminishes the urgency to address conservation challenges.
“If all of a sudden there’s a million fake videos of leopards invading the city, the impact of a real event is much smaller,” Wilson said. “It’s basically desensitizing people to the real situation.”
And in the worst-case scenario, people may come to believe everything they see online, including fake videos, or disbelieve real footage, he added, potentially undermining wildlife conservation and fundraising efforts.
Another fake video that gained attention in October 2025 showed a drunk man near Pench Tiger Reserve in Madhya Pradesh petting a tiger and offering it a drink from a plastic bottle, falsely suggesting that wild animals such as big cats can be safely approached and petted.
Reacting to the spate of fake tiger videos, Pench and other tiger reserves in Madhya Pradesh issued a joint statement on November 7, 2025, saying such misleading videos “not only distort the image of wild animals but also belittle the honest work of those who protect them.”
Rajnish Kumar Singh, deputy director of Pench Tiger Reserve, told ST that there is currently no risk of such videos escalating human-animal conflict in Pench, as few people living in or around the reserve have access to mobile internet.
He pointed out that these fake videos usually originate in urban environments, first circulate among urban consumers, and are largely dismissed as fake by the time they reach remote areas.
Pench and its surrounding areas are home to various marginalized indigenous groups, including the Gond and Baiga communities.
But Rajnish Kumar worries that this could change in the coming years, as AI technology grows more sophisticated, telling fact from fiction becomes harder for most people, and residents of remote forest areas get smartphones and go online for the first time.
“It can cause chaos in the village without anyone realizing that the video is fake,” he said.
Experts believe that the only effective way to address this challenge is to improve media literacy to help people recognize fake videos, especially in an increasingly digital world where law enforcement agencies struggle to deal with such threats.
Tips include looking out for inconsistent shadows or blurry graphics, and checking videos with reliable sources. “Education is the only solution,” Rajnish Kumar said. “People need to be smart enough to realize this is fake.” – The Straits Times/ANN
