After receiving a content takedown notice from the Delhi Police, Meta deleted two reels posted by the Indian National Congress (INC) on Facebook and Instagram. The AI-generated videos allegedly depicted Prime Minister Narendra Modi and Adani Group chief Gautam Adani.
According to disclosures uploaded by Meta to the Lumen transparency database and seen by HT, the company received the notice on December 18 and restricted access to the two reels in India the same day. The content had been posted from the INC's verified accounts on Facebook and Instagram.
In its submission to Lumen, Meta said the Delhi Police had invoked Section 66C of the Information Technology (IT) Act, 2000, in addition to Section 79(3)(b) of the IT Act and Rule 3(1)(d) of the IT Rules.
“The risks of non-compliance with the notice included the loss of safe harbour for the specific items and potential criminal liability for Meta's local employees,” the company said in its Lumen filing, referring to the intermediary protections under Section 79 of the IT Act. The company also said it had notified the affected users of the action.
Meta noted in its Lumen filing that the reels did not violate its community standards. A Meta spokesperson said the content was restricted “once we receive a valid blocking order” from law enforcement. The platform removes content that violates its own community standards or local laws; in this case, Meta acted in view of the legal obligations attached to the Delhi Police notice and the risk of losing safe harbour protection.
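For readers who want to inspect such disclosures directly, Lumen notices can also be retrieved programmatically. The snippet below is a minimal sketch only: it assumes Lumen's per-notice JSON endpoint (/notices/&lt;id&gt;.json) and its optional X-Authentication-Token researcher header, and the notice ID shown is a placeholder, not the ID of Meta's actual filing.

```python
# Minimal sketch: fetching a takedown notice from the Lumen database as JSON.
# Assumptions: Lumen serves individual notices at /notices/<id>.json and accepts
# an optional researcher token via the X-Authentication-Token header. Anonymous
# access may be rate-limited and some fields redacted.
import requests

NOTICE_ID = 12345678   # placeholder; replace with a real Lumen notice ID
API_TOKEN = None       # optional researcher token

def fetch_lumen_notice(notice_id: int, token: str | None = None) -> dict:
    """Return the JSON record for a single Lumen notice."""
    headers = {"User-Agent": "lumen-notice-lookup/0.1"}
    if token:
        headers["X-Authentication-Token"] = token
    resp = requests.get(
        f"https://lumendatabase.org/notices/{notice_id}.json",
        headers=headers,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    notice = fetch_lumen_notice(NOTICE_ID, API_TOKEN)
    print(list(notice))  # top-level keys; field names vary by notice type
```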
The INC's and the BJP's official Instagram accounts regularly post AI-generated content, much of it in the form of memes. The recent takedown comes as the government pushes for better disclosure of AI-generated content.
The Ministry of Electronics and Information Technology (MeitY) on October 22 proposed draft amendments to the IT Rules that would require social media companies to allow users to self-declare whether the content they upload is AI-generated. If users do not make such a declaration, platforms must proactively detect and label AI-generated content, the proposed amendments state.
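In practical terms, the proposal amounts to a two-step decision: honour the uploader's self-declaration if one is made, otherwise fall back to platform-side detection. The Python sketch below is purely illustrative of that flow; the Upload structure, field names and the detect_ai_content stub are hypothetical stand-ins, not anything specified in the draft rules.

```python
# Illustrative sketch of the labelling flow described in the draft amendments:
# use the uploader's self-declaration when present; otherwise the platform
# proactively detects and labels. All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Upload:
    media_id: str
    user_declared_ai: bool | None  # True/False if the user answered, None if skipped

def detect_ai_content(media_id: str) -> bool:
    """Placeholder for a platform-side classifier (e.g. watermark or provenance checks)."""
    return False  # stub

def label_upload(upload: Upload) -> dict:
    if upload.user_declared_ai is not None:
        # User self-declared: apply the label they chose.
        is_ai, source = upload.user_declared_ai, "self-declaration"
    else:
        # No declaration: the platform must detect and label on its own.
        is_ai, source = detect_ai_content(upload.media_id), "platform-detection"
    return {"media_id": upload.media_id, "ai_generated": is_ai, "label_source": source}

print(label_upload(Upload("reel-001", user_declared_ai=True)))
print(label_upload(Upload("reel-002", user_declared_ai=None)))
```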
The latest data from Meta's transparency report shows that in the first six months of 2025, the company restricted access to more than 16,000 items in India following removal requests from law enforcement agencies, including orders routed through the Ministry of Home Affairs' Indian Cyber Crime Coordination Centre (I4C) and court orders. These requests covered issues such as impersonation, hate speech and harassment. I4C has developed the Sahyog portal, a centralised platform that automates the sending of notices to intermediaries by the appropriate government or its agencies under Section 79(3)(b) of the IT Act.
During the same period, Meta also restricted access to over 7,500 items under Section 69A of the IT Act, 2000, and the IT Blocking Rules, on directions from MeitY and the Ministry of Information and Broadcasting. These orders related to public order, misinformation and national security concerns.
Overall takedown numbers have also risen. In 2024, Meta removed approximately 25,600 pieces of content across Facebook and Instagram in India; in 2023 and 2022, the figures were 13,930 and 1,800, respectively. By June 2025, the count had already exceeded 28,000, according to company disclosures.
