Google scrambles to manually remove weird AI answers in search

Social media has been abuzz with examples of Google's new AI Overview product making bizarre statements, from telling people to put glue on pizza to encouraging them to eat rocks. Each time a new example goes viral, Google scrambles to manually disable AI Overviews for that search, which is why users keep watching the memes disappear shortly after they're posted.

This is an odd situation because Google has been testing AI Overviews for a year (the feature was released in beta as Search Generative Experience in May 2023), and CEO Sundar Pichai said the company has processed more than 1 billion queries in that time.

But Pichai also said that Google has reduced the cost of delivering AI answers by 80 percent over the same period “through hardware, engineering and technology innovation.” It seems like these kinds of optimizations may have been done too early, before the technology was ready.

“A company that was once known for shipping cutting-edge, high-quality products is now known for shipping low-quality products that become memes,” one AI founder, who asked not to be named, told The Verge.

Google maintains that its AI Overview product mostly serves “high-quality information” to users. “Many of the examples we've seen are unusual queries, and we've also seen examples that have been doctored or cannot be reproduced,” Google spokesperson Megan Farnsworth said in an email to The Verge. Farnsworth also confirmed that the company is “taking swift action to remove AI summaries for specific queries where appropriate under our content policies, and is using these examples to develop broader improvements to our systems, some of which we've already begun rolling out.”

“We're seeing a lot of changes in the way we think about the world,” Gary Marcus, an AI expert and professor emeritus of neural science at New York University, told The Verge. Many AI companies are “selling the dream” that their technology will go from 80 percent accurate to 100 percent, Marcus says. Reaching the first 80 percent is relatively easy, since it only requires approximating large amounts of human data, but the last 20 percent is far harder; in fact, Marcus believes it may be the hardest part of all.

“You actually need some level of reasoning to determine: Is this plausible? Is this source legitimate? You have to do the sort of things that human fact-checkers do, but that might actually require artificial general intelligence,” Marcus says. Both he and Meta's AI chief Yann LeCun agree that the large language models powering current AI systems, such as Google's Gemini and OpenAI's GPT-4, won't produce AGI.

It's a tough situation for Google: Bing made its AI push before Google did, complete with Satya Nadella's famous “we made them dance” line; OpenAI is reportedly building its own search engine; an AI search startup is already worth $1 billion; and younger users looking for a better experience are switching to TikTok. The company is clearly feeling competitive pressure, and it's that pressure that's behind its rushed AI releases. Marcus points out that in 2022, Meta released an AI system called Galactica, which had to be pulled shortly after launch for, among other things, telling people to eat glass. Sounds familiar.

Google has big plans for AI Overviews. What's there today is just a small part of what the company announced last week. There are big ambitions here: multi-stage reasoning for complex queries, the ability to generate AI-curated result pages, video search in Google Lens, and more. But for now, the company's reputation depends on getting the basics right, and that's not looking good.

“[These models] essentially have no ability to check the validity of their own work, and that's hurting the industry,” Marcus said.
