OpenAI, the creator of ChatGPT, said it disrupted the deceptive use of AI in a covert influence operation focused on the Indian elections within 24 hours of its inception, adding that the operation did not achieve a significant increase in audience engagement. In a report on its website, OpenAI said Israeli political campaign management company STOIC produced content about the Indian elections alongside content about the Gaza conflict.
OpenAI said it banned a set of Israeli-run accounts that were being used to generate and edit content for the influence operation across X, Facebook, Instagram, YouTube and other websites. "In May, the network began generating India-focused comments criticizing the ruling BJP and praising the opposition Indian National Congress Party," the company said. "In May, we disrupted several operations focused on the Indian elections within 24 hours of their inception." It added: "The operation targeted audiences in Canada, the US and Israel with English and Hebrew content. In early May, it began targeting Indian audiences with English content." It did not provide further details.
Commenting on the report, Minister of State for Electronics and Technology Rajeev Chandrasekhar said, “It is absolutely clear that @BJP4India has been and continues to be the target of influence operations, misinformation and foreign interference perpetrated by or on behalf of some political parties in India.”
“This is a very dangerous threat to our democracy. It is clear that this is being driven by vested interests both in India and abroad and needs to be thoroughly scrutinised, investigated and exposed. My view now is that these platforms could have exposed this much earlier and not just before the elections are over,” he added.
OpenAI said it is committed to developing safe and broadly beneficial AI. "Our investigation into the alleged covert IO is part of a broader strategy to achieve our goal of safe AI deployment." The company said it is committed to enforcing its policies and increasing transparency to prevent misuse of AI-generated content, especially when it comes to detecting and stopping covert influence operations (IO), which seek to manipulate public opinion or affect political outcomes without revealing the true identities or intentions of the actors behind them.
“Over the past three months, we have disrupted five covert IOs that attempted to use our models to support deceptive activities across the internet. As of May 2024, these campaigns do not appear to have significantly increased audience engagement or reach as a result of our services,” the company said.
OpenAI clarified that it had disrupted the operations of an Israeli commercial company called STOIC; only the operations were disrupted, not the company itself.
“We named the operation Zero Zeno, after the founder of Stoic philosophy. The people behind Zero Zeno used our model to generate articles and commentary that were posted across multiple platforms, including Instagram, Facebook, X, and websites associated with the operation,” the statement said.
The content posted by these various campaigns focused on a wide range of issues, including Russia's invasion of Ukraine, the Gaza conflict, Indian elections, Western politics, and criticism of the Chinese government by Chinese dissidents and foreign governments.
OpenAI said it is taking a multi-pronged approach to combating misuse of its platform, including monitoring and disrupting threat actors such as nation-state-aligned groups and advanced persistent threats. "We are investing in technology and teams to identify and disrupt actors like those discussed here, and we are leveraging AI tools to combat misuse." The company said it is working with others in the AI ecosystem to highlight potential misuse of AI and share its findings with the public.
