'Data void', 'information gap': Google explains strange AI search results

Google has announced more than a dozen technical improvements to AI Overviews.

A week after screenshots of Google's artificial intelligence search tool, AI Overviews, giving inaccurate answers circulated on social media, Google issued an explanation, citing “data voids” and “information gaps” as the cause of the blunders.

A few weeks ago, Google rolled out the experimental AI search feature in the US, but it quickly came under heavy criticism after people shared bizarre responses from the tool on social media, such as telling users to eat rocks or to stick cheese to pizza with glue.

In a blog post, Google acknowledged that it “did indeed see strange, inaccurate, or unhelpful AI summaries,” but said it “never saw” summaries giving dangerous answers on topics like leaving dogs in the car or smoking while pregnant. Google also pointed to a number of faked screenshots being shared online, which it called “obvious” and “absurd.”

The company said it had seen “new, nonsensical searches that appear to be designed to deliver false results,” adding that one area it needed to improve was interpreting gibberish queries and satirical content.

Google cited the question in the viral screenshot, “How many rocks should I eat?”, as an example, saying that virtually no one had asked that question before the screenshot went viral. There wasn't much quality web content online that seriously explored the question, creating a “data void” or “information gap,” Google said. When asked why its search tool gave a strange response to this particular query, Google said, “There was satirical content on the topic that also happened to be republished on a geological software provider's website. So when someone typed that question into a search, they saw an AI summary that dutifully linked to one of the few websites that had addressed the question.”

In the blog post, Liz Reid, vice president and head of Google Search, also explained how AI Overviews works and how it differs from chatbots and other LLM products. She said that AI Overviews is “powered by customized language models and integrated with our core web ranking system, and is designed to perform traditional 'search' tasks, like identifying relevant, high-quality results from Google's index.” So AI Overviews not only provides text output but also related links that support the results, allowing users to explore further.

“This means that AI Overviews generally don't 'hallucinate' or fudge facts like other LLM products,” she said.

According to Google, when AI Overviews makes a wrong decision, it could be because it “misinterpreted the query, misinterpreted the nuances of language on the web, or there isn't much good information available.”
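The grounding-plus-ranking design Reid describes, and the “not much good information available” failure mode, can be sketched in a few lines of Python. Everything below is illustrative and hypothetical, not Google's actual system: a toy ranker scores indexed documents against the query, and the overview declines to answer when too few relevant sources exist, mimicking the “data void” situation.

```python
# Illustrative sketch only -- not Google's real system. It mimics the design
# described above: rank web documents against the query, then build an answer
# grounded in those documents, with supporting links attached.

def rank_sources(query, corpus, min_overlap=2):
    """Keep documents sharing at least `min_overlap` words with the query."""
    q_words = set(query.lower().split())
    scored = [
        (len(q_words & set(doc["text"].lower().split())), doc)
        for doc in corpus
    ]
    relevant = [(n, doc) for n, doc in scored if n >= min_overlap]
    relevant.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in relevant]

def overview(query, corpus, min_sources=2):
    """Return a grounded summary with links, or None on a 'data void'."""
    sources = rank_sources(query, corpus)
    if len(sources) < min_sources:
        return None  # too little quality coverage: decline to synthesise
    return {
        "summary": sources[0]["text"],             # stand-in for an LLM summary
        "links": [doc["url"] for doc in sources],  # lets users explore further
    }

corpus = [
    {"url": "https://example.com/a", "text": "geology minerals rocks formation"},
    {"url": "https://example.org/b", "text": "rocks and minerals guide"},
]
print(overview("types of rocks and minerals", corpus))   # grounded answer
print(overview("how many rocks should I eat", corpus))   # None: data void
```

The point of the `None` branch is the key difference from a free-running chatbot: when the index offers little serious coverage of a question, a grounded system can decline rather than improvise.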

Google says that after identifying the pattern of mistakes, it made more than a dozen technical improvements, including:

  • Google has built better mechanisms to detect nonsensical queries and has restricted the inclusion of satirical and humorous content.
  • Google has updated its systems to limit the use of user-generated content in responses that could offer misleading advice.
  • Google has added triggering restrictions for queries where AI summaries proved less useful.
  • Google no longer shows AI summaries for hard news topics, where “freshness and factuality” are important.

Apart from these improvements, Google said it found content policy violations in “fewer than one in seven million unique queries” where AI Overviews appeared and took action against them.


