Google has removed some of its artificial intelligence health summaries after a Guardian investigation found people were at risk of harm from false and misleading information.
The company says its AI Overview, which uses generative AI to provide a snapshot of important information about a topic or question, is “helpful” and “reliable.”
However, some of the summaries shown at the top of search results provide inaccurate health information, putting users at risk of harm.
In one case, described by experts as “dangerous” and “alarming”, Google provided false information about a key liver function test that could lead people with severe liver disease to wrongly believe they were healthy.
Anyone typing in “What are the normal ranges for a liver blood test” was presented with a list of numbers, with little context and no account taken of the patient’s nationality, gender, ethnicity or age, the Guardian found.
Experts said the ranges Google’s AI Overview presented as normal could differ significantly from what is actually considered normal, and warned that the summary could lead seriously ill patients to wrongly believe their test results were fine and skip follow-up medical appointments.
As a result of the investigation, the company removed the AI summary for the search terms “What is the normal range for a liver blood test” and “What is the normal range for a liver function test.”
A Google spokesperson said: “We don’t comment on individual removals within search. When context is missing from an AI Overview, we work on broader improvements and take action under our policies as needed.”
Vanessa Hebditch, director of communications and policy at the liver health charity the British Liver Trust, welcomed the removal but said: “We remain concerned that if the questions were asked differently, misleading AI summaries could still be served, and that other AI-generated health information could be inaccurate and confusing.”
The Guardian found that slightly modified versions of the original query, such as “lft reference range” or “lft test reference range”, still triggered an AI summary. Hebditch said this was a major concern.
“Liver function tests (LFTs) are a collection of different blood tests. Understanding the results and what to do next is complex and requires more than just comparing a series of numbers.
“However, because the AI Overview displays the list of tests in bold, it is very easy for readers to miss that these may not be the right numbers for their test.
“Furthermore, the AI summary does not warn that these tests can give normal results even in people with severe liver disease who need further care. This false sense of security can be very harmful.”
Google, which has a 91% share of the global search engine market, said it was reviewing the new examples provided by the Guardian.
“Our bigger concern in all of this is that Google is being selective: it can block AI Overviews for single search results, but it is not addressing the larger issue of AI Overviews for health,” Hebditch said.
Sue Farrington, president of the Patient Information Forum, which disseminates evidence-based health information to patients, the public and health professionals, welcomed the removal of the summary but said she still had concerns.
“While this is a good result, it is only a necessary first step towards maintaining trust in Google’s health-related search results. There are still too many instances where Google’s AI Overviews provide people with inaccurate health information.”
Farrington said millions of adults around the world already struggle to access reliable health information. “That’s why it’s so important that we guide people to robust, researched health information and care from trusted healthcare providers.”
AI Overviews still appear for other examples the Guardian first flagged to Google, including summaries of information about cancer and mental health that experts described as “completely wrong” and “really dangerous”.
When asked why these AI summaries were not also removed, Google said they linked to well-known, trusted sources and prompted people to seek professional advice where it was important to do so.
A spokesperson said: “Our team of internal clinicians reviewed what was shared and found that in many cases the information was not inaccurate and was supported by high-quality websites.”
Victor Tangermann, a senior editor at the technology website Futurism, said the Guardian’s findings showed Google had work to do “to ensure its AI tools don’t spread dangerous health misinformation”.
Google said AI Overviews appear only for queries where it has high confidence in the quality of the response, and added that it continually measures and reviews the quality of its summaries across different categories of information.
In an article for Search Engine Journal, the senior writer Matt Southern wrote: “AI summaries appear above the ranked results. When health is the topic, errors matter more.”
