Generative AI companies face allegations of prioritizing profits over effective rights-respecting guardrails. OpenAI responds

In 2025, BHRC researchers recorded numerous cases of generative AI harm in their database. One entry details OpenAI's decision to block depictions of Martin Luther King Jr. on Sora after his family raised concerns that AI-generated representations misrepresented the civil rights leader and reinforced racial stereotypes. According to reports, women are now being threatened and harassed online with AI-generated material of a newly visceral kind: threats that combine real photos, synthetic audio, and fabricated videos into immersive, traumatic depictions of their own deaths. In one case, a woman received an AI-generated image of herself hanging from a noose and another of herself screaming while on fire. Some women have received AI-generated threats depicting them wearing clothes they actually own. These cases amplify psychological harm and fear, normalize online discrimination, and ultimately have a chilling effect on participation in public debate.

A recent 404 Media report claims that AI-generated content is furthering the dehumanization of immigrant communities through the generation and algorithmic amplification of fabricated ICE raid videos. Tech companies, including those that monetize AI-generated content, are said to profit from this abuse by algorithmically rewarding cycles of fear, anger, and prejudice, prioritizing profits over effective guardrails. By prioritizing engagement over truth, platforms can normalize cruelty and obscure the human suffering behind sensationalized images.

The social impact is significant. People struggle to separate fact from fiction, empathy and a shared sense of reality erode, social cohesion begins to crumble, and accountability for discrimination and incitement to violence disappears somewhere in the thousands of lines of a company's terms of service. Meanwhile, communities targeted by AI harassment face real-world consequences, including prejudice, intimidation, psychological trauma, and sometimes physical attacks. These examples are not isolated incidents but indicators of systemic risk, arising where generative AI intersects with existing inequalities, misaligned algorithmic incentives, and ineffective (or non-existent) regulation of the human rights impacts of AI tools.

Under the United Nations Guiding Principles on Business and Human Rights, businesses have a responsibility to identify the risks associated with their operations, and responses to allegations of harm must recognize both technical and social realities. Companies developing and profiting from AI-generated synthetic content must commit to robust cross-platform enforcement of non-discrimination and non-violence policies, including tamper-resistant watermarks and C2PA provenance metadata, and actively monitor for abuse beyond their own apps. They should work directly with other platforms to quickly (but responsibly) remove harmful and inflammatory content, provide detection tools that are accessible to all users, especially journalists and human rights defenders, and invest in effective grievance mechanisms for at-risk communities. Technology companies need to recognize the direct psychological and broader societal impacts of AI-generated disinformation and dehumanization, and ensure that their “responsible innovation” efforts extend to real-world enforcement.
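To make the provenance point concrete, the sketch below shows what a bare-minimum, heuristic check for embedded C2PA metadata could look like. It is illustrative only: the byte markers reflect how C2PA manifest stores are typically embedded via JUMBF boxes, which is an assumption of this sketch, and a real verifier must use the official C2PA SDK to parse and cryptographically validate the manifest.

```python
# Heuristic presence check for embedded C2PA provenance metadata.
# Illustrative only: a real verifier should use the official C2PA SDK
# to parse and cryptographically validate the manifest. C2PA manifests
# are typically embedded as JUMBF boxes, so we scan for the associated
# byte markers as a rough signal (an assumption for this sketch).

from pathlib import Path

C2PA_MARKERS = (b"jumb", b"c2pa")  # JUMBF box type and C2PA manifest label


def has_c2pa_markers(path: str, chunk_size: int = 1 << 20) -> bool:
    """Return True if the file appears to contain C2PA/JUMBF markers.

    Presence does not prove the manifest is valid or untampered;
    absence suggests provenance was never added or has been stripped
    (e.g., by re-encoding, screenshotting, or platform processing).
    """
    tail = b""
    with Path(path).open("rb") as f:
        while chunk := f.read(chunk_size):
            window = tail + chunk
            if any(marker in window for marker in C2PA_MARKERS):
                return True
            tail = chunk[-8:]  # overlap so a marker split across chunks is caught
    return False


if __name__ == "__main__":
    import sys

    for media_file in sys.argv[1:]:  # e.g., python check_c2pa.py video.mp4
        status = "markers found" if has_c2pa_markers(media_file) else "no markers"
        print(f"{media_file}: {status}")
```

A check like this can tell a journalist only that provenance markers are present or absent; an absent marker may mean the metadata was stripped somewhere in transit, which is exactly the cross-platform fragility described above.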

As WITNESS Executive Director Sam Gregory explained, there are concrete steps companies can take to address the human rights risks and harms they are creating:

“1. Cumulative Harm to Trust: The concern is that society’s ability to trust all videos, not just specific scam videos, is being cumulatively undermined. As high-quality synthetic video becomes more widespread, we need to be clear about how we think about collective harm to visual truth and trust, not just individual cases of misinformation and disinformation.

2. Out-of-Platform Reality: Where Content Travels. Safety measures designed for managed environments will not survive there; metadata will be lost and watermarks will be removed.

3. Provenance & Watermarking: C2PA metadata and watermarks are important but currently ineffective, owing to easy removal, inconsistent cross-platform implementation, and premature claims about authenticity. There is insufficient senior leadership investment and resourcing to make this work. (A brief sketch after this list illustrates how easily routine re-encoding strips embedded provenance metadata.)

4. Detection: Not Usable on Real-World Frontlines. OpenAI does not support front-line journalists and civil society organizations who need practical, accessible detection tools that work in real-world verification contexts where resources are scarce.

5. Likeness Protection: Beyond In-App Controls. In-app likeness controls are insufficient if there is no scalable way for the public to detect or counter abuse of their likeness across the open web.

6. Preparedness: Sora was launched into a rapidly changing synthetic-media environment without equipping vulnerable communities around the world with the media literacy and verification capacity they need to navigate it.”
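Gregory's third point, the fragility of embedded provenance, is easy to demonstrate. The minimal sketch below (file names are hypothetical, and it assumes the input JPEG actually carries a C2PA manifest) shows how a single routine re-encode with Pillow, the kind of transformation platforms apply to most uploads, silently drops the metadata segments that carry provenance:

```python
# Minimal sketch: a routine re-encode strips embedded provenance metadata.
# File names are hypothetical; assumes the source JPEG carries a C2PA manifest.

from PIL import Image  # pip install Pillow

SRC = "original_with_provenance.jpg"  # hypothetical input with C2PA metadata
DST = "reencoded_copy.jpg"

with Image.open(SRC) as im:
    # Pillow writes a fresh JPEG from pixel data; application segments
    # (e.g., the APP11/JUMBF segments that carry a C2PA manifest) are not
    # copied unless explicitly re-attached, so provenance is lost here.
    im.save(DST, format="JPEG", quality=85)

for path in (SRC, DST):
    with open(path, "rb") as f:
        present = b"c2pa" in f.read()
    print(f"{path}: contains 'c2pa' marker -> {present}")
```

This is why provenance cannot rely on embedded metadata alone: it needs watermarks that survive re-encoding and consistent cross-platform implementation, the gap Gregory's third point names.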

On December 4, 2025, the Center for Business and Human Rights asked OpenAI to respond to allegations that its technology, Sora, is being used to generate “videos that exploit human suffering” and that “Sora's watermark is incredibly easy to hide.” OpenAI responded as follows:

“AI-generated videos are created and shared across a variety of tools, so combating deceptive content requires an ecosystem-wide effort. On the creation side, we take a multi-layered approach: our usage policies prohibit deceptive or misleading uses, we include watermarks and C2PA provenance metadata, and we maintain internal systems designed to determine whether a video was created by us, taking action when we detect violations…”

OpenAI's full response is available here. Meta did not respond to 404 Media's requests for comment.


