X will temporarily bar creators from monetizing their content through the Creator Revenue Sharing Program if they share AI-generated videos of armed conflict without clearly disclosing that the videos were made with AI. The move came three days after the United States and Israel launched attacks against Iran, sparking a series of violent responses.
Nikita Bier, head of product at X, made the announcement in a post on X on Tuesday morning, writing, “During times of war, it is important that people have access to authentic information on the ground.” He added that AI platforms have lowered the bar for creating “content that can potentially mislead people.”
Today, we are revising our Creator Revenue Sharing Policy to maintain the authenticity of content on the timeline and prevent abuse of the program.
During times of war, it is important that people have access to authentic information on the ground. With today’s AI technology…
— Nikita Bier (@nikitabier) March 3, 2026
Creators who post AI-generated videos of armed conflict without proper disclosure will be suspended from creator revenue sharing for 90 days, Bier said. Further violations will result in permanent removal from the program.
In response to one user, Bier explained that creators should open the post’s menu and select “Add Content Disclosure,” which offers a “Made with AI” label option.
Bier added that X can check whether a video was generated by AI using metadata embedded by AI tools, in conjunction with Community Notes, X’s crowdsourced fact-checking feature.
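X has not published the details of this detection pipeline. As background, AI provenance metadata such as C2PA content credentials is commonly carried inside an MP4 file as a top-level `uuid` box in the ISO Base Media File Format. The sketch below is purely illustrative, assuming only the standard MP4 box layout (4-byte big-endian size followed by a 4-byte type code); it scans a file’s top-level boxes and flags the presence of a `uuid` box as a hint, not proof, of an embedded provenance manifest.

```python
import struct


def parse_top_level_boxes(data: bytes) -> list[str]:
    """Return the type codes of top-level ISO BMFF (MP4) boxes.

    Each box starts with a 4-byte big-endian size (which includes the
    8-byte header) and a 4-byte ASCII type code.
    """
    boxes = []
    offset = 0
    while offset + 8 <= len(data):
        size, box_type = struct.unpack_from(">I4s", data, offset)
        if size == 1:
            # A size of 1 means a 64-bit extended size follows the type.
            if offset + 16 > len(data):
                break
            size = struct.unpack_from(">Q", data, offset + 8)[0]
        elif size == 0:
            # A size of 0 means the box extends to the end of the file.
            size = len(data) - offset
        if size < 8:
            break  # malformed box; stop rather than loop forever
        boxes.append(box_type.decode("ascii", "replace"))
        offset += size
    return boxes


def has_uuid_box(data: bytes) -> bool:
    # C2PA manifests are typically stored in a top-level 'uuid' box in
    # MP4 files. Presence is only a heuristic signal: the box payload
    # would still need to be verified against the C2PA specification.
    return "uuid" in parse_top_level_boxes(data)
```

In practice, a checker would also inspect the `uuid` box’s payload and validate the manifest’s signatures, which is far beyond this heuristic.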
X debuted its revenue sharing initiative in mid-2023. Through this program, creators subscribed to X Premium or Verified Organizations who have at least 5 million organic impressions in the past three months and at least 500 verified followers are eligible to monetize their content. These users earn money based on their engagement, a shift made in 2024 from previous rules that based payments on ad impressions.
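The eligibility rules reported above reduce to a simple threshold check. A minimal sketch, assuming hypothetical field names (X’s internal data model is not public) and only the criteria stated in this article:

```python
from dataclasses import dataclass


@dataclass
class Creator:
    """Hypothetical creator record; field names are illustrative only."""
    is_premium_subscriber: bool   # X Premium or Verified Organizations
    organic_impressions_90d: int  # organic impressions, past three months
    verified_followers: int


def is_eligible(c: Creator) -> bool:
    # Thresholds as reported: a Premium/Verified Organizations
    # subscription, 5 million organic impressions in the past three
    # months, and at least 500 verified followers.
    return (
        c.is_premium_subscriber
        and c.organic_impressions_90d >= 5_000_000
        and c.verified_followers >= 500
    )
```

All three conditions must hold; missing any single threshold disqualifies the creator.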
This policy update comes two days after a Wired investigation found that X was drowning in misinformation about the Iran conflict. Videos from months or even years ago were presented as new, some posts contained outright disinformation about specific attacks, and AI-generated images and videos spread like wildfire, even being shared by the Iranian newspaper Tehran Times.
X has also recently had its own problems with AI-generated content. From late December to early January, the site was flooded with nonconsensual sexual deepfakes of real people after users prompted X’s built-in Grok AI to generate the images. Although the platform eventually tightened its rules, the changes were far from comprehensive: users can still create such content through a variety of Grok interfaces.
