Simply put
- A viral AI video replaces the creator’s face and body with a Stranger Things actor and has been viewed more than 14 million times.
- Researchers say full-body deepfakes remove visual cues previously used to detect face-only manipulation.
- Experts warn that the same tools could facilitate fraud, disinformation and other abuses as access expands.
A video purportedly made with Kling AI’s 2.6 motion control feature took social media by storm this week. The clip, by Brazilian content creator Eder Xavier, shows him seamlessly swapping his face and body with those of Stranger Things actors Millie Bobby Brown, David Harbour, and Finn Wolfhard.
The video spread widely across social platforms, with more than 14 million views on X, and additional versions were subsequently posted. The clip also caught the attention of techies, including a16z partner Justin Moore, who shared the video from Xavier’s Instagram account.
“We are not prepared for how quickly AI will change the production pipeline,” Moore wrote. “Some of the latest video models have an immediate impact on Hollywood. An infinite number of characters can be replaced at negligible cost.”
As image and video generation tools continue to improve, with new models such as Kling, Google’s Veo 3.1 and Nano Banana, FaceFusion, and OpenAI’s Sora 2 expanding access to high-quality synthetic media, researchers warn that the technology seen in the viral clip is likely to spread rapidly beyond isolated demonstrations like this one.
Slippery slope
Viewers were stunned by the quality of the body-swapping video, but experts warned that the same technique could readily be used as a tool for identity fraud.
“The floodgates are open. It’s easier than ever to steal a person’s digital likeness, including their voice and face, and now a single image can bring them to life. No one is safe,” Emmanuel Saliba, chief research officer at cybersecurity firm GetReal Security, told Decrypt.
“We will continue to see organized fraud at all scales, from one-on-one social engineering to coordinated disinformation campaigns to direct attacks on important companies and institutions,” he said.
Saliba said the viral video featuring the Stranger Things cast shows how thin the guardrails against abuse are currently.
“For a few dollars, anyone can now use a single image to create a full-body video of a politician, celebrity, CEO, or private individual,” he said. “There is no way to protect an individual’s digital likeness by default. There is no identity guarantee.”
For Yu Chen, a professor of electrical and computer engineering at Binghamton University, full-body character swapping poses new challenges beyond the facial-only manipulation used in previous deepfake tools.
“Full-body character replacement represents a significant evolution in synthetic media capabilities,” Chen told Decrypt. “These systems must simultaneously handle pose estimation, skeletal tracking, clothing and texture transfer, and natural movement synthesis across the human form.”
In addition to the Stranger Things clip, Xavier also posted a video swapping himself with Leonardo DiCaprio in The Wolf of Wall Street.
We’re not ready.
AI has redefined deepfakes and character swapping. And it’s remarkably simple.
A wild example. Bookmark this.
[🎞️JulianoMass on IG] pic.twitter.com/fYvrnZTGL3
— Minchoi (@minchoi) January 15, 2026
“Previous deepfake techniques primarily operated within a constrained operational space and focused on replacing facial regions while leaving the rest of the frame largely untouched,” Chen said. “Detection methods can exploit mismatched boundaries between the synthetic face and the original body, as well as temporal artifacts when head movements do not naturally match body movements.”
Chen added that while concerns about financial and identity fraud remain, several other abuse vectors require attention. “Non-consensual intimate imagery represents the most pressing vector of harm, as these tools lower the technological barrier to creating synthetic explicit content featuring real individuals,” he said.
Other threats highlighted by Saliba and Chen include political disinformation and corporate espionage, in which fraudsters impersonate employees or executives, publish fabricated “leaked” clips, circumvent controls, and harvest credentials through attacks that suspend suspicion long enough for a trusted face in the video to gain access inside a sensitive business.
While it’s unclear how the studio and actors appearing in the videos will react, Chen said developers will play a key role in implementing safeguards because the clips rely on publicly available AI models.
Still, he said responsibility should be shared among platforms, policymakers, and end users, since placing it solely on developers is unfeasible and could hinder beneficial uses.
As these tools become more widespread, Chen said, researchers should prioritize detection models that identify essential statistical features of synthetic content, rather than relying on easily stripped metadata.
“Platforms need to invest in both automated detection pipelines and human review capabilities, while developing clear escalation procedures for high-stakes content involving public figures or potential wrongdoing,” he said, adding that policymakers should focus on establishing clear accountability frameworks and mandating disclosure requirements.
“The rapid democratization of these capabilities means that response frameworks developed today will be tested at scale within months rather than years,” Chen said.