Google fixes AI-generated search summaries after outlandish answers go viral



Google said on Friday it had made “a dozen technical improvements” to its artificial intelligence systems after it discovered its revamped search engine was spitting out misleading information.

The tech company announced a revamp of its search engine in mid-May that would frequently display AI-generated summaries at the top of search results, and soon after, social media users began sharing screenshots of the most outlandish answers.

Google has largely defended its AI-generated summaries, saying they're generally accurate and thoroughly tested in advance. But Liz Reid, head of Google's search business, acknowledged in a blog post on Friday that “we've certainly seen strange, inaccurate, or unhelpful AI-generated summaries.”

Many of the examples were absurd, but some were dangerous or harmful falsehoods.

When the Associated Press asked Google last week which wild mushrooms people should eat, the company returned a lengthy AI-generated summary that was technically mostly correct but “missing a lot of information that could make you sick or even kill you,” said Mary Catherine Aime, a professor of mycology and botany at Purdue University who reviewed Google's responses to AP's questions.

For example, she said, the information about a mushroom called a puffball was “mostly accurate,” but Google's summary emphasized looking for mushrooms with solid white flesh, a trait shared by many potentially deadly puffball lookalikes.

In another widely shared example, when an AI researcher asked Google how many Muslims have served as US presidents, Google confidently responded with a long-debunked conspiracy theory: “The US has had one Muslim president, Barack Hussein Obama.”

Google said last week that the error about Obama violated its content policies and that it made an immediate correction to prevent a similar mistake from happening again.

Reid said Friday that in other cases, the company has been working on broader improvements, such as mechanisms to detect “nonsense questions” (such as “How many rocks should I eat?”) that shouldn't be answered with an AI summary.

The AI systems have also been updated to limit the use of user-generated content (such as posts on Reddit) that may offer misleading advice. In one widely shared example, a Google AI overview last week cited a satirical Reddit comment suggesting that glue could be used to get cheese to stick to pizza.

Reid said the company has added more “trigger limits” to improve the quality of answers to certain questions, such as those about health.

Google summaries are designed to help users get authoritative answers to the information they are looking for as quickly as possible, without having to click through a ranked list of website links.

But some AI experts have long warned against Google ceding search results to AI-generated answers that could perpetuate bias and misinformation, and endanger people seeking help in emergencies. The AI systems, known as large language models, work by predicting which words would best answer a question based on the data they were trained on. They have a tendency to fabricate facts, a problem widely studied as hallucination.

In her blog post on Friday, Reid argued that Google's AI-generated summaries “generally don't 'hallucinate' or make things up the way other products built on large language models do” because they are more tightly integrated with Google's traditional search engine and only show what is backed up by the top web results.

“When AI Overview gets it wrong, it's usually for other reasons: it misinterprets the query, it misinterprets the nuances of language on the web, or it just doesn't have much good information to draw on,” she wrote.
