Jeremy Carrasco didn't upload his first videos to TikTok and Instagram until June. In the short time since, he has amassed more than 300,000 followers on each platform. No, that's not exactly Charli D'Amelio territory, but it certainly makes him one of the most prominent voices for AI literacy on social media.
Jeremy told The Verge he had always wanted to try becoming a YouTuber. Instead, he found himself behind the camera, working as a producer and director on multi-camera live streams. He eventually decided to take the plunge after realizing that most of the conversation around generative AI was being driven by technology companies. "We need other people to come at it from a creator and producer perspective," he said. He maintains a YouTube page, but it's on TikTok and Instagram that he has found his audience.
The original idea was to talk about how to use AI. "I named my page showtoolsai because I was actually pretty optimistic about AI and how it can be used ethically in video production." That optimism turned out to be short-lived.
One thing he quickly realized was that no one was really covering even the basics of how to identify AI video. "There's a need for this … and I had all the knowledge I needed to do it," he said. He also knew this wasn't the kind of conversation today's AI influencers could credibly start: "we need someone who comes from close to this creator space and understands it."
He quickly found his niche, posting about the telltale characteristics of AI videos: blurry textures, wobbly eyes, and items that appear and disappear in the background. While Jeremy's main focus remains AI literacy and identifying Sora-generated videos, he has also begun to delve into the pitfalls and potential dangers posed by the growing number and quality of AI-generated videos, especially for creators. Among the signs he points to:
- Soft skin textures and a "dreamy" atmosphere
- Shifting, shimmering textures ("Sora noise")
- Inconsistent background details
- Gibberish on signs and documents instead of actual words
- Wobbly eyes
- Eerily perfect teeth
- Hurried speech patterns
- Scenarios that seem too good to be true
After all, it's the creator economy that's at stake. People are now competing with an endless stream of AI-generated content. Jeremy wants people to understand that "this doesn't have to be difficult." Sora 2 is free, removes many of the barriers to churning out clips in bulk, and can generate audio that sounds pretty convincing at first listen.
The goal doesn't have to be sinister. In some cases, it may simply be to rack up views or tap into the TikTok Creator Fund. A seven-second AI clip of a cat doing something absurd isn't worth much on its own. But Jeremy says that if such clips are strung together into a one-minute compilation that pulls in 5 million views, the account holder could earn about $1,000. That may not sound like much, but for people in developing countries it can be a meaningful source of income.
Of course, there are worse actors. According to Jeremy, accounts like the AI Chinese-medicine account Yang Mun (or Yang Mugs on some sites) are simple scams. In its videos, a vaguely offensive caricature of an Eastern-style healer dispenses health and wellness advice that appears to be aimed primarily at a Western audience. The account has over 1.5 million followers and earns money from Instagram views alone. But the real scam begins when viewers are lured to a website to buy an $11 e-book. If the e-book even exists (at least one person has contacted Jeremy to say they couldn't access it), it is almost certainly entirely AI-generated, just like the videos.
Accounts like Maddie Quinn aren't just trying to scam people out of money; they're actively stealing other people's content and hijacking their likenesses. These accounts typically lift videos from female creators and replace the real people with AI-generated avatars, or swap in AI-generated faces. In some cases, a creator's entire likeness is stolen and fed through an AI generator to produce content that ends up on OnlyFans.
When asked whether he thinks generative AI can be used ethically in the creator space at this point, Jeremy says, "Generally no." However, "there will be carve-outs [for accessibility] and cultural considerations that prevent me from flatly saying no," he says.
Some companies, like Lionsgate, have tried to build ethical video-generation models by training them entirely on their own libraries. But that wasn't enough data to produce anything usable. "The only way they can create AI video as a generative tool the way they're doing it now is by stealing massive amounts of people's data … I think that's fundamentally flawed and we need to reject it," says Jeremy.
Unfortunately, the platforms will only accelerate the collapse of the creator economy that fueled their rise. Instagram, Facebook, TikTok, and YouTube have all but ignored the influx of AI slop and have failed to consistently enforce their own rules for labeling AI content. That makes it harder for creators to cut through the noise and makes the platforms less appealing to users.
To make matters worse, they're all building their own generative AI tools. "Being a creator is basically like running an advertising agency," says Jeremy. Sponsorship deals are the main way creators make money, but AI has quickly begun flooding that space with ads (of very questionable quality). And as AI video takes over advertising, it will "ruin the entire creator economy."
Meta, Amazon, and DirecTV have all dabbled in generative AI advertising services. Ultimately, "they're going to sell ads directly to clients," Jeremy said. Some creators may be tempted to cash in on the AI bandwagon, but Jeremy says, "It's very reasonable to question whether this is actually a good business opportunity for creators, and I don't think it is."
