TikTok withdraws testing of problematic AI feature

Did TikTok just have its own “glue pizza” moment?

The company pulled a new artificial intelligence feature it was testing after it ran into problems, attaching wildly inaccurate AI-generated text summaries to videos from users including Charli D’Amelio, Shakira, and Saturday Night Live.

These “AI Overviews” are designed to provide additional context for a video, recommend products similar to what’s on screen, and generally explain what’s going on. The tool worked well for summarizing some posts, but for others it caused more hallucinations than Bryan Johnson had during his livestreamed mushroom trip.

Below are some of the more chaotic examples of the AI overviews this reporter has seen on the app over the past week.

  • The AI feature described a video of Charli D’Amelio sitting alone in front of a white wall and speaking directly into the camera as “a collection of different blueberries with different toppings.”
  • A dog trainer’s post explaining why dogs kick their legs after going to the bathroom was described as a “fascinating display of intricate origami art, meticulously folded from a single sheet.”
  • According to the AI summary, Shakira’s video promoting the release of her new song was a “repetitive sequence of several different blue shapes appearing and moving on screen.”
  • A viral post by a user named Victoria about her heartbreak was described as a “mesmerizing close-up of a tiny hand tracing intricate patterns over and over on a smooth surface.”
  • A video promoting Olivia Rodrigo’s upcoming SNL appearance was called “a person’s face gradually replaced by a random, meaningless string of letters and numbers.”

One Reddit user wrote that the feature is like watching a video and then “separately opening another tab and using a random text generator to create the caption.”

Now, TikTok is withdrawing the test after receiving feedback from users, a company spokesperson confirmed to Business Insider. The spokesperson said the AI Overview feature has been updated to focus on identifying products in videos rather than describing a video’s full content.

The tool had been in testing for several months and was available to a limited number of users in the U.S. and several other markets, the spokesperson said, calling it an experiment.

TikTok declined to say which model it used to power AI Overview, but the feature’s in-app description says it relies on either TikTok’s own AI technology or third-party products.

Seeing the hallucinatory AI summaries pop up in my feed this week felt like a throwback to the days when ChatGPT and other early AI products were making things up on a regular basis.

When Google rolled out its own AI Overviews in 2024, the feature confidently declared, for example, that a dog had played in the NHL, and told my colleague Katie Notopoulos to add glue to pizza to keep the cheese from sliding off.

While AI tools have gotten better at answering some questions (a recent analysis by the AI company Oumi found that Google’s AI Overviews were accurate about 90% of the time), it was strangely reassuring to see that the technology doomsayers predict will wipe out white-collar jobs and take over many aspects of our lives can still fail in stupid and surprising ways.