Has Google already fixed its AI answers problem?

AI For Business


  • A pair of extremely poor answers generated by Google's AI went viral two weeks ago.
  • Now that story seems to have faded. Have Google's AI-powered answers improved, or has Google simply stopped showing them as often?
  • Probably both.

Two weeks ago, many people in tech, and even outside it, were debating answers generated by Google's AI, which sometimes instructed people to eat rocks or put glue on pizza.

This week, there are still some discussions of Google's bad AI answers floating around. (Thanks for chiming in, Defector.)

But I'm seeing and hearing about it a lot less, and I haven't seen a social media post about a bad AI answer go viral in a while.

So has Google already fixed the AI answers it calls "AI Overviews"? Or has it started showing them less frequently, making it less likely that users will stumble on a bad one?

When I contacted Google, the company pointed me to a blog post published a week ago explaining why bad AI answers were generated, arguing there weren't many of them, and saying it was limiting their use in areas such as "hard news topics where freshness and factuality are important."

Google PR also released an updated statement: "AI Overviews are designed to provide value beyond existing features on search results pages and to appear for helpful queries, and they will continue to appear for many searches. We're continually improving when and how we show AI Overviews to make them as helpful as possible, including several technical updates we've made over the past few weeks to improve response quality."

But here are two data points that suggest something… happened.

First of all, people seem to have stopped complaining about it on social media.

According to data from social media monitoring firm Brandwatch, users of X (which I still call Twitter) started paying attention to Google's AI Overviews the day after Google's May 14 I/O event. Then, a week later, mentions spiked, presumably as people saw examples of the really bad answers Google had provided. (As Google points out, some of those bad answers were actually fake. Note the correction at the end of this New York Times article.)


Of course, it's possible that Google is generating just as many bad answers as before, and X/Twitter users have simply moved on to other, more appealing things.

But it's also very possible that they simply aren't seeing as many AI answers. For one thing, Google said it's already working on some of the issues, including limiting the "inclusion of satirical or humorous content" in answers and, in some cases, turning AI answers off entirely.

Another argument for the "there's less to see" theory comes from search optimization company BrightEdge, which says it has been tracking Google's AI Overviews since Google began testing them last fall, primarily among users who signed up to try them through the experimental Google Labs program.

At one point, BrightEdge founder Jim Yu said, some keywords were generating AI answers 84% of the time. But by the time of Google I/O, when the company announced it was rolling out AI answers to most users, that number had dropped to about 30%. Then, within a week of the announcement, it had dropped again, this time to about 11%. (A Google spokesperson disputes BrightEdge's methodology and claims its results are inaccurate; Google doesn't provide its own statistics.)

[Chart: BrightEdge]

None of this is conclusive, but for now, it looks like Google has weathered the worst storm of its own making.


