NBA superstar LeBron James has become one of the first major celebrities to push back against the misuse of his likeness in AI-generated content. James' legal team recently issued a cease and desist letter to FlickUp, the company behind the AI image generation tool Interlink AI.
According to a report from 404 Media, FlickUp disclosed the legal action to members of its Discord community in late June. The Interlink AI tool hosted on the server allowed users to create AI-generated videos of famous NBA players such as James, Stephen Curry, and Nikola Jokic. While many of the videos were harmless, some crossed into objectionable territory, including a widely shared clip depicting the Los Angeles Laker cradling a pregnant belly.
One of the most widely viewed videos created with Interlink AI depicts an AI-rendered Sean "Diddy" Combs in a prison setting, sexually assaulting Curry while James stands passively in the background. That video alone has reportedly accumulated over 6.2 million views on Instagram.
404 Media confirmed that James' legal team was behind the cease and desist letter sent to Jason Stacks, founder of FlickUp. Stacks said that within 30 minutes of receiving it, he decided to "remove all realistic people from Interlink AI's software." Stacks also posted a video addressing the situation, captioning it simply, "I'm so f**ked."
LeBron James joins a growing list of celebrities whose likenesses have been used without consent in AI-generated content. Pop star Taylor Swift has been repeatedly targeted by deepfake porn, and Scarlett Johansson and Steve Harvey have both publicly condemned the misuse of their images and expressed support for legislation to curb it. James, however, stands out as one of the first to take formal legal action against a company that enables this type of content through its AI tools.
Several bills are currently moving through Congress to address the rise of nonconsensual AI-generated content. The recently passed Take It Down Act criminalizes publishing, or threatening to publish, intimate images without consent, including deepfakes and AI-generated porn. Two additional proposals have also been introduced: the NO FAKES Act of 2025 and the Content Origin Protection and Integrity from Edited and Deepfaked Media Act of 2025.
The NO FAKES Act focuses on preventing unauthorized AI reproductions of a person's voice and likeness, while the latter seeks to protect original works and require transparency around AI-generated media.
