The only way to stop AI from being “art” in 2026 is to make it uncool.

A hill I'm willing to die on: I don't consider content created entirely by AI image or video generators to be “art.”

It's a rule I made for myself, and it's taken up a lot of my brain space in 2025. Over the past year, we've gone from clunky, hallucination-riddled AI videos to clips that are almost indistinguishable from real footage. This year may feel like it's gone on forever, but the pace of advancement in AI video over the past seven months has been truly dizzying. The same goes for image generation; remarkably, Google's Nano Banana and OpenAI's newest image models have also been around for only a few months.

The year's biggest leap was about more than just adding audio to video. Veo 3 proved that cinematic AI video is not an oxymoron. And Sora, both the app and the second-generation model that powers it, has given us a terrifying glimpse of a future where your likeness becomes a plaything for every internet weirdo's imagination. But if you can get past that nausea (I still can't), it's also the best AI video model I've tested, with undeniable technical prowess at avoiding common AI errors.

And this year, more than ever, we've heard from artists, creators and copyright holders that generative AI models are being built and deployed irresponsibly. Disney and Warner Bros. filed a strongly worded copyright infringement lawsuit against Google and Midjourney, calling them a “bottomless pit of plagiarism.” Anthropic announced a $1.5 billion settlement with authors who accused it of copyright infringement. And because AI's energy demands are especially high for video, AI companies are racing to build massive data centers across the US, despite concerns from local communities and environmental experts.

I spend more time using these generative AI tools than most people. The companies behind them tout themselves as “democratizing creation” or “making it easier than ever to make art.” That rhetoric intensified this year, as big tech companies not exactly known for their creativity or compassion for creators tried to convince potential customers that they have artists' best interests at heart. Thanks to the technical improvements in this year's models and their viral popularity, our online lives are filling up with AI at an alarming rate. And what AI creates is definitely not art. Period.

Expect even more generative AI in 2026; it feels like the flood will never stop. That's why it's more important than ever to draw a clear line between AI-generated content and real human art. It will also be more important than ever to point out that so-called AI “art” is pathetic, boring and unoriginal. While I'm still hopeful we'll get better AI labeling, we need to rethink how we approach generative AI and the content (and slop) it creates as it fills our online lives.




AI vs. Art

AI-generated content imitates human art. That's by design. These generative AI models are built and refined on huge amounts of human-made data; for image and video models, that data includes photos, designs and social media posts. The broader a model's training data, the more capable it is. You can, for example, ask ChatGPT to create an image in the style of Studio Ghibli (as many people did in March 2025). The model knows the studio's distinctive anime aesthetic and can apply that style to its own AI images.

Because of that process, AI rarely creates anything new. In one of my favorite quotes about AI this year, filmmaker (and former Meta AI data trainer) Nora Garrett told reporters while promoting her film After the Hunt: “AI is being sold to us as the future, but it's a regurgitation of our collective past, resold to us as the future.”

She continued, “I think at the end of the day, there's always a human element that people want. I don't know if making things happen faster, cheaper, more optimally really helps the human spirit and human collectivism.”

(My runner-up quote of the year comes from Guillermo del Toro. When asked his stance on the use of AI, he replied, “I'd rather be dead.”)

That's not to say you can't make art by collaging past work, but generative AI models have different limitations than human creativity does. AI fundamentally cannot connect with people the way art can. It isn't designed to make us reflect deeply; in fact, there's growing evidence that we stop thinking critically when we use AI. Great art makes us uncomfortable, shows us what we don't want to see and connects us with humanity as a whole. AI is notoriously bad at all of that.

For a seasonal example, take the pas de deux from Pyotr Ilyich Tchaikovsky's The Nutcracker. If you've seen the ballet, you may remember that it culminates in a duet between the Sugar Plum Fairy and her Cavalier. It's one of the most recognizable dances in the repertoire, in part because of its emotional, iconic score. Tchaikovsky famously wrote the 1892 ballet while grieving the death of his sister Alexandra, and that mournful, melancholic influence can be heard in the music, especially in the pas de deux. The emotional heart of this ballet is strong enough that it still moves audiences 133 years after its premiere. No so-called AI music generator will ever manage that.

Even legitimate uses of AI that don't claim to be art come with risks. We've seen an explosion of AI slop: shoddy, low-quality, plasticky and seemingly pointless images and videos. Some of that is inevitable with social media, but this year's rise of generative AI models has made it much worse. This slop doesn't pretend to be art, but it's so ubiquitous online that, as my colleague Abrar Al-Heti wrote earlier this year, social media has become a slop-filled wasteland.

We can't trust tech companies to stop AI “art” or slop

Tech companies made it clear this year that image and video generation are now essential to winning the AI race. And it's a well-funded, competitive marathon, where any innovation can give a company the edge it needs to stay in business and retain users.

For that reason, we can't rely on AI companies to stop AI “art” or slop. Many have invested in ways to prevent deepfakes and other potentially illegal content, but we've already seen how easily each system's guardrails can be circumvented. AI detection technology matters, but it isn't sophisticated enough to catch every piece of misinformation AI generates.

If we want to stop the spread of AI “art”, we have to make it uncool.


The only way to slow the supply is to reduce the demand. Generative AI is popular and genuinely useful for certain tasks, like brainstorming and personalization, so it's hard to imagine it stopping completely. But we can be more thoughtful about how we use it. AI isn't the right tool for every project. Great creative work is often discovered in the process of making it; creative work is knowledge work, and handing that intellectual and emotional labor to AI defeats the purpose.

We must demand better from ourselves and our creators. The movement against AI and AI slop is already in full swing. The backlash against McDonald's and Coca-Cola's AI holiday ads was immediate. Artists who share their work online emphasize that it's AI-free, while others openly profess their hatred of AI.

We shouldn't elevate AI hobbyists to the level of professional creators. We can't let professional creators and brands hand us AI slop instead of human-made work. And we certainly can't let tech companies treat the slop their AI produces as an unfortunate but inevitable byproduct of innovation. We can, and must, do better in 2026.




