The girl later admitted her mistake and was counseled by police. She was warned not to post misleading content related to self-harm.

The incident took place in Agra’s Fatehpur Sikri. (Representative image)
A 17-year-old girl from Agra’s Fatehpur Sikri recorded a “fake” suicide attempt video and posted it on social media to gain more followers. The incident on Thursday triggered an alert from Meta’s AI surveillance system, after which police rushed to her home.
Police said the video showed the girl drinking liquid from a bottle and then collapsing. The clip was flagged by Meta’s AI system as a possible suicide attempt, and an immediate alert was sent to the relevant authorities to take action.
Shortly after, the police’s social media cell was alerted about the incident, and the girl’s whereabouts were traced. It was later revealed that the video was “staged” and the girl was unharmed; she had ingested only water and no toxic substances. After admitting her mistake, the girl received counseling from police and was warned not to post misleading content related to self-harm on social media.
How does Meta AI work?
Facebook and Instagram’s Meta AI and other safety technologies identify content related to potential crimes and self-harm through a combination of proactive artificial intelligence monitoring, machine learning, and human review. After these systems detect an imminent risk or illegal activity, they take a variety of actions, from displaying resource helplines to notifying law enforcement.
Providing resources
If a person expresses suicidal thoughts, it is important to seek help as soon as possible. Suicide prevention resources available on Facebook and Instagram were developed with input from people with personal experience and in collaboration with leading mental health organizations.
With the help of machine learning technology, Meta has expanded its ability to identify potentially suicidal and self-harm content. The technology is used in several countries to provide timely assistance to people in need.
The technology uses pattern recognition signals, such as phrases or comments of concern, to identify possible distress.
“We use artificial intelligence to prioritize the order in which our teams review reported posts, videos, and live streams. This allows us to efficiently enforce policies and get resources to people quickly. It also allows our reviewers to prioritize urgent posts and contact emergency services when community members may be at risk. Speed is critical,” Meta said.
The content is then escalated to Meta’s community operations team, which determines whether it violates the company’s policies and, where needed, recommends contacting local emergency responders.
How to ask for help immediately
Meta’s technology to identify possible suicide and self-harm is integrated into both Facebook and Instagram posts, as well as Facebook and Instagram Live.
If someone is contemplating self-harm during a live video, those watching can contact that person directly or report an issue.
Once reported, the content is reviewed by a member of Meta’s community operations team.
“In serious cases, we work with emergency services to carry out wellness checks. Thanks to Meta technology, emergency responders can now quickly reach people in need,” Meta said.
Disclaimer: If you or someone you know needs help, please call any of the following helplines: Aasra (Mumbai) 022-27546669, Sneha (Chennai) 044-24640050, Sumaitri (Delhi) 011-23389090, Cooj (Goa) 0832-2252525, Jeevan (Jamshedpur) 065-76453841, Prateeksha (Kochi) 048-42448830, Maitri (Kochi) 0484-2540530, Roshni (Hyderabad) 040-66202000, Lifeline (Kolkata) 033-64643267
March 8, 2026, 08:30 IST
