
The rapid proliferation of hyper-realistic videos produced by tools like OpenAI's Sora has fueled disinformation campaigns and raised alarms over the failure of digital watermarks and disclosure policies to prevent millions of users from mistaking synthetic content for reality.
The advent of powerful video generation tools, especially OpenAI's Sora, has sparked an unprecedented surge of deceptive content across major social media platforms such as TikTok, X, YouTube, Facebook, and Instagram. In the two months since Sora was introduced, experts tracking disinformation have reported that the volume of sophisticated computer-generated video has exposed serious deficiencies in existing platform policies that require disclosure of AI-generated content. As reported in the New York Times, this deluge signals a crisis of digital credibility, with many videos designed to inflame political tensions or to serve foreign influence operations that actively manipulate public perception.
Guardrails cannot contain technological breakthroughs
Social media companies maintain policies that broadly prohibit deceptive content and require users to disclose their use of artificial intelligence (AI). However, these guardrails have proven ineffective against rapid technological advances such as AI video generation. Deceptive videos range from harmless memes to content aimed at inflaming social tensions, such as a video, reported on by the New York Times, that circulated during the recent U.S. government shutdown and targeted food stamp recipients.
The impact of this vulnerability is already clear. For example, Fox News mistakenly published an article treating a similar fake video about food stamps as evidence of genuine public outrage; the article has since been deleted.
Researchers and human rights groups are now demanding that platforms take greater responsibility for moderating and labeling content. As reported in the New York Times, Sam Gregory, executive director of the human rights organization Witness, which focuses on technology threats, argued that companies “could do better by proactively seeking out AI-generated information and labeling it themselves.”
Watermarking and intentional circumvention challenges
AI tool developers such as OpenAI (Sora) and Google (Veo) have tried to embed safeguards. Both tools stamp their videos with a visible watermark (Sora places a small label on its clips) and embed invisible metadata that serves as machine-readable proof of a video's origin, as reported in The New York Times. The aim is to let platforms automatically detect and flag synthetic videos.
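To make the mechanism concrete, here is a minimal Python sketch of how a platform's upload pipeline might consume such provenance metadata. This is an illustration only, not OpenAI's, Google's, or any platform's actual code; `Provenance`, `extract_provenance`, and `label_for_upload` are hypothetical names, and the stub stands in for a real C2PA/Content Credentials parser.

```python
# Illustrative sketch of provenance-based auto-labeling at upload time.
# Hypothetical names throughout; not any platform's real pipeline.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Provenance:
    generator: str       # e.g. "OpenAI Sora" or "Google Veo"
    ai_generated: bool   # whether the manifest marks the video as synthetic

def extract_provenance(video_path: str) -> Optional[Provenance]:
    """Hypothetical stand-in for a real C2PA/Content Credentials reader.

    A real implementation would parse the manifest embedded in the file;
    the metadata is simply absent if it was stripped by re-encoding,
    cropping, or screen capture.
    """
    return None  # placeholder for this sketch

def label_for_upload(video_path: str) -> str:
    """Decide what label, if any, to attach when a video is uploaded."""
    provenance = extract_provenance(video_path)
    if provenance is None:
        # No readable metadata: the platform cannot rely on provenance and
        # must fall back on user disclosure or its own detection models.
        return "unlabeled"
    if provenance.ai_generated:
        return f"AI-generated ({provenance.generator})"
    return "unlabeled"

print(label_for_upload("clip.mp4"))  # -> "unlabeled" with this stub
```

Note that such a check degrades silently: if the metadata has been removed, the video simply comes back unlabeled.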
However, malicious users can easily bypass these safeguards.
- Evasion techniques: Some users simply ignore disclosure rules, while others use readily available tools to manipulate videos and blur or remove identifying watermarks. The New York Times found dozens of examples of Sora videos appearing on YouTube without automatic labels.
- Delayed labeling: Even when labels are applied, they often only appear after thousands or millions of people have already viewed the deceptive content.
- User vigilance failure: An analysis of comments on widely shared TikTok videos about food stamps found that roughly two-thirds of commenters responded as if the videos were real, even when subtle clues and small labels were present. Sam Gregory argued that a strategy of “personal vigilance” fails when an entire timeline demands close scrutiny, a burden that bears no resemblance to how people actually interact with their feeds, as reported in the New York Times.
Disinformation, foreign influence, and financial incentives
The proliferation of realistic AI videos has proven a boon for disinformation, fraud, and foreign influence operations. For example, a Russian disinformation campaign exploited corruption scandals within Ukraine's political leadership, using Sora videos with shoddily concealed watermarks on TikTok and X to spread fake footage of front-line soldiers crying, as reported in the New York Times.
James P. Rubin and Darjan Vujica, former officials of a now-disbanded State Department office, argued in Foreign Affairs that advances in AI are accelerating efforts to undermine democracies. As published in the New York Times, Vujica argued that “barriers to using deepfakes as part of disinformation are crumbling, and once misinformation spreads, it is difficult to correct the record.”
Social media platforms have responded slowly and unevenly. X and TikTok declined to comment on the flood of AI fakes, and a spokesperson for Meta (owner of Facebook and Instagram) noted that it is difficult to label every video when the technology evolves so rapidly. Alon Yamin, CEO of Copyleaks, said platforms currently have no financial incentive to limit the spread of AI videos as long as those videos generate clicks and traffic, though, as reported in the New York Times, this short-term gain could ultimately hurt the platforms' long-term content quality.
Looking ahead: The crisis of authenticity across the ecosystem
The current state of social media reveals an institutional unpreparedness for the rapid evolution of generative AI, and the volume and sophistication of deceptive content will only increase. What the future demands is, as OpenAI puts it, an “ecosystem-wide” effort, focused not only on improving metadata and watermarking technology but also on establishing rigorous industry standards and regulatory oversight that mandate responsible disclosure. Without a fundamental shift in priorities from engagement metrics to content truthfulness, the public's ability to trust the visual information they see on their screens will continue to erode, posing a serious threat to public debate and the integrity of democracy.
Based on the original article published by The New York Times: https://www.nytimes.com/2025/12/08/technology/ai-slop-sora-social-media.html
