Google scales back AI in search after telling users to put glue on pizza


Liz Reid at the Google I/O conference earlier this month.
Google

  • Google is scaling back AI-generated answers in search results after users noticed errors.
  • The AI Overview feature was launched two weeks ago but has faced backlash over false and nonsensical responses.
  • Google is implementing changes to detect nonsensical queries and limit content drawn from forums.

Google is backing away from using AI-generated answers in search results after several infamous errors, including one that told users to put glue in their pizza sauce.

Google launched its AI Overview feature two weeks ago, displaying AI-generated summaries of search results at the top of the page for users in the United States. In recent days, users, including SEO experts, have noticed a reduction in these summaries, suspecting that the tech giant is scaling them back in response to criticism. The AI feature cannot be turned off while using the search engine.

Google's head of search, Liz Reid, confirmed in a blog post on Thursday that the company is addressing some of these issues.

These changes follow instances of AI Overviews going haywire. Screenshots of the feature's missteps flooded the internet, including search responses claiming that Barack Obama was a Muslim president, that no country in Africa begins with the letter K, and that people should eat "at least one small rock a day."

Google's new guardrails include detecting "nonsensical queries" where AI results shouldn't be shown, limiting satire and humorous content, and introducing restrictions on prompts where there isn't enough data on the topic to make the AI results useful.

Google's own ads show that false summaries aren't limited to a few viral search terms: a demo video published two weeks ago showed the AI Overview feature giving incorrect advice on how to repair a film camera.

In her blog post, Reid also said that Google is restricting content from forums and social media that may contain misleading advice.

“While forums are often great for providing trusted, first-hand information, they can also lead to less-than-useful advice, like using glue to keep cheese on pizza,” Reid wrote in the post.

Reid wrote that the company already had systems in place to block AI-generated results on news and health topics, and she said that viral results encouraging people to smoke while pregnant or to leave their dogs in hot cars were "faked screenshots."

The list of changes is the latest example of a major tech company rolling out AI products only to reimpose restrictions after a flurry of chaos.

Earlier this year, Google's AI image-generation tool refused to generate images of white people. The company was criticized for being too "woke" and for creating historically inaccurate images, including Asian Nazis and Black Founding Fathers. A few weeks later, Google executives apologized and suspended the feature.

On February 28, Axel Springer, the parent company of Business Insider, along with 31 other media groups, filed a $2.3 billion lawsuit against Google in a Dutch court, alleging damages caused by the company's advertising practices.
