Google Faces Criticism for Downplaying Health Warnings in AI Advice

Google is under scrutiny for potentially jeopardizing user health by minimizing the safety warnings attached to its AI-generated medical advice. The company’s AI Overviews, which appear prominently above search results, are supposed to prompt users to seek professional medical guidance. However, the Guardian reports that these disclaimers often do not appear until users click a button labeled “Show more,” making them easy to miss when users first encounter the medical information.

When users inquire about health-related topics, Google assures them that its AI Overviews will indicate when professional advice is necessary. Yet the initial presentation of medical advice lacks such disclaimers; warnings appear only beneath additional information, in a smaller, lighter font. The disclaimer states: “This is for informational purposes only. For medical advice or a diagnosis, consult a professional. AI responses may include mistakes.”

In response to the Guardian’s findings, Google did not dispute that these disclaimers are initially hidden or that they are displayed less prominently. A spokesperson maintained that AI Overviews encourage users to seek expert medical advice, often incorporating such suggestions within the summary itself.

Experts in artificial intelligence and patient advocacy expressed concern over the findings. “The absence of disclaimers when users are initially served medical information creates several critical dangers,” warned Pat Pataranutaporn, an assistant professor at the Massachusetts Institute of Technology. He emphasized that AI models can generate inaccurate information, which poses significant risks in healthcare contexts.

Another academic, Gina Neff, a professor of responsible AI at Queen Mary University of London, criticized the design of AI Overviews, stating that they prioritize speed over accuracy. “This leads to mistakes in health information, which can be dangerous,” she added.

In January, a previous investigation by the Guardian highlighted the risks associated with misleading health information in Google’s AI Overviews. Neff noted that the findings illustrated the necessity for prominent disclaimers. “Google makes people click through before they find any disclaimer,” she said. Users reading quickly may mistakenly assume the information provided is reliable, despite the potential for significant errors.

Following the Guardian report, Google removed AI Overviews for some medical searches, but not all. Sonali Sharma, a researcher at Stanford University’s Center for AI in Medicine and Imaging, highlighted the issue of visibility: because AI Overviews appear at the top of search results, she explained, they can give a false sense of reassurance that discourages users from seeking further information.

Sharma added, “For many people, that single summary creates a sense of confidence, which can lead to real-world harm. The AI Overviews can contain both accurate and inaccurate information, making it challenging for users to discern the truth unless they are already familiar with the topic.”

A Google spokesperson responded to the criticisms by asserting, “It’s inaccurate to suggest that AI Overviews don’t encourage people to seek professional medical advice. In addition to a clear disclaimer, AI Overviews frequently mention seeking medical attention directly within the overview itself, when appropriate.”

Patient advocacy groups are urging immediate action. Tom Bishop, head of patient information at Anthony Nolan, a blood cancer charity, stressed the importance of visible disclaimers. “Misinformation is a real problem, especially in health contexts,” he said. “That disclaimer needs to be much more prominent to encourage users to reflect on the information they receive and consider consulting their medical team.”

Bishop suggested that disclaimers should be positioned at the top of AI Overviews, clearly visible and in the same font size as the rest of the text. “This is essential for ensuring that users can adequately assess the information’s relevance to their individual health situations,” he concluded.

As Google navigates these concerns, the balance between speed and accuracy in AI-generated health information remains a critical focus for both users and the healthcare community.