The AI content boom is eroding trust.

As artificial intelligence rapidly changes the way images and videos are created, Instagram CEO Adam Mosseri warns of a future where it's no longer easy to tell what's authentic online.

In a series of year-end posts shared on Instagram and Threads, Mosseri reflected on the current state of photos, videos, and authenticity on social media amid what he described as a growing flood of AI-generated content. At the heart of his message is a clear concern: the signals people once relied on to trust what they see online are starting to crumble.

Mosseri said one of the biggest risks Instagram faces is not just competition or changing formats, but the speed at which the world itself is evolving. He pointed out that major changes are already underway in the lead-up to 2026. “Authenticity is becoming infinitely reproducible,” he wrote.

“Everything that was important to creators – authenticity, connection, the ability to have a voice that can't be faked – is now accessible to anyone with the right tools,” Mosseri said. The idea of visual evidence is weakening as deepfakes rapidly advance, with AI producing photos and videos that are increasingly indistinguishable from media captured by real cameras.

Mosseri links this change to the early Internet's transfer of power from organizations to individuals, as the cost of distributing information fell to nearly zero. “Individuals, not publishers or brands, have established that there is a huge market for people's content,” he noted.

At the same time, trust in institutions is at an all-time low. As a result, people are increasingly turning to self-shot content produced by creators they trust and admire. While there are widespread complaints about what is often referred to as “AI slop,” Mosseri argued that not all AI content is of low quality. “There’s a lot of great AI content out there,” he said.

However, even high-quality AI images still carry visible tells.

“High-quality AI content also has a look,” Mosseri said, pointing to overly smooth skin and unnaturally polished visuals. But he warned that this is temporary. “Things will change. There will be more realistic AI content.”

Rather than making creators irrelevant, Mosseri believes this change could make authenticity more valuable. As it becomes harder to discern what is true, the demand for trusted voices may increase. “The hurdle is changing from 'Can you make it?' to 'Can you make something that only you can make?'” he wrote.

Mosseri also challenged long-held perceptions about Instagram itself. For many users, especially those over 25, the platform is still synonymous with a feed of sleek, square photos. “That feed is dead,” he said.

He explained that people stopped sharing personal moments on public feeds years ago; today, the primary way to share is through direct messages. Blurry photos and shaky videos of everyday experiences, shoe shots, and unflattering candids now dominate the way users document their lives.

This raw aesthetic is increasingly permeating public content. In that context, Mosseri believes parts of the imaging and camera industry are optimized for the wrong look. “They're competing to make everyone look like a professional photographer in 2015,” he said.

In a world where AI can generate perfect images instantly, that professional look is no longer the prize.

“Flattering images are cheap to produce and boring to consume,” Mosseri argued. Imperfection, by contrast, becomes meaningful. “In a world where everything could be perfect, imperfection becomes a signal.”

“Rawness is no longer just an aesthetic preference,” he added. “It's evidence. It's defensive. It's a way of saying, 'This is real because it's imperfect.'”

But Mosseri doesn't believe this approach is a permanent solution. He acknowledged that AI will eventually be able to generate any aesthetic, even imperfect ones, that looks convincingly authentic. At that point, the focus will have to shift again. “We will need to shift the focus to who is saying something rather than what is being said.”

Reflecting on the pace of change, Mosseri pointed out that for most of our lives, photographs and videos have been thought of as accurate records of real moments. “This is clearly no longer the case. It will take years to adapt,” he wrote.

He expects users to move from default belief to default skepticism. Rather than assuming that what they see is real, people will increasingly ask who is sharing something and why. “This would be unpleasant. We are genetically predisposed to trust our eyes,” he said.

Mosseri added that platforms like Instagram will continue to work on identifying and labeling AI-generated content, but as AI improves, that task will only get harder. “It would be more practical to fingerprint real media than fake media,” he said, pointing to solutions such as cryptographic image signatures applied at the time of capture.
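The “fingerprint real media” idea Mosseri describes can be sketched in a few lines. The following is a simplified illustration, not any actual Instagram or camera-vendor mechanism: real provenance standards such as C2PA attach public-key signatures to media at capture, while this sketch substitutes an HMAC keyed with a hypothetical per-device secret to show the same verification flow — unmodified media verifies, any edit breaks the fingerprint.

```python
# Simplified sketch of capture-time media fingerprinting (illustrative only).
# Real systems (e.g. C2PA Content Credentials) use public-key signatures;
# here an HMAC with a hypothetical device secret stands in for the signature.
import hashlib
import hmac

DEVICE_KEY = b"hypothetical-per-device-secret"  # assumption, not a real key

def sign_at_capture(image_bytes: bytes) -> str:
    """Fingerprint the pixels at the moment of capture."""
    digest = hashlib.sha256(image_bytes).digest()
    return hmac.new(DEVICE_KEY, digest, hashlib.sha256).hexdigest()

def verify(image_bytes: bytes, tag: str) -> bool:
    """True only if the bytes are unchanged since capture."""
    return hmac.compare_digest(sign_at_capture(image_bytes), tag)

original = b"...raw sensor data..."   # stand-in for real image bytes
tag = sign_at_capture(original)

print(verify(original, tag))            # prints True: unmodified media
print(verify(original + b"edit", tag))  # prints False: any alteration fails
```

The asymmetry Mosseri points to is visible here: verifying real media only requires checking a fingerprint made at capture, whereas detecting fakes requires recognizing an ever-improving generator's output.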

Ultimately, Mosseri argued that labels alone are not enough. “We need to uncover more context about the accounts that are sharing content so people can make informed decisions,” he wrote.

He believes that in a world of “infinite abundance and infinite doubt,” the outstanding creators are those who maintain credibility by being realistic, transparent, and consistent.
