From search results to single answers: making AI health summaries safer
Patient Advocacy Group (PAG) views on the accuracy of healthcare information delivered via Google and other search engines' AI summaries are part of an ongoing and evolving discussion within our industry, one that deserves a greater spotlight.
Artificial intelligence has rapidly moved from an emerging technology to an everyday touchpoint in how people search for and process health information. Yet much of the public conversation about AI feels like a race between individuals and companies to see who can optimise the fastest and increase visibility in LLM results, rather than a collective effort to ensure the accuracy of information. In the healthcare arena, this narrow focus is risky and warrants greater discussion.
GEO: the summary of a thousand sources
Many in our industry are familiar with GEO (Generative Engine Optimisation) and the ways in which AI algorithms can be influenced to pull from certain information sources (earned media being a key one; see my colleague Siobhan's thoughts on that here). AI-generated summaries curate, compress, and interpret information on behalf of the user. Instead of receiving a list of links to explore, users now receive a single answer: an authoritative-sounding summary shaped by an algorithm whose sources may or may not be reliable.
For individuals navigating chronic or acute health conditions, the search for information and answers can be tough, and an AI summary that does the legwork for you is extremely enticing when faced with an urgent need for information and advice. However, the distinction between information and misinformation matters enormously in health, and as an industry that prides itself on making information more accessible and improving health literacy, it's a distinction we can't ignore as AI rises.
This discussion is evolving quickly, and the launch of platforms like ChatGPT Health is turning the way we look for health information on its head. But is everyone now going directly to AI for their health information, or is that reserved for the more digitally literate among us?
In January, ChatGPT reported that 40 million of its weekly active users (WAUs) globally are already prompting about healthcare daily, with 25% doing so every week. Among US adults, AI use for health is mainly practical: checking or exploring symptoms (55%), understanding medical terms (48%), and learning about treatment options (44%).
At the same time, traditional search behaviours persist: a 2025 University of Pennsylvania survey found that 71% of US adults seek health information via search engines. What's new is trust: 63% felt that AI-generated health information was somewhat or very reliable.
The combination of growing use and growing trust is where risk emerges; there is no fail-safe option here. While AI tools can increase access to health information, the evidence shows they aren't consistently reliable when it matters most. They sound confident and offer detailed explanations, but they can still get the urgency and nuance wrong.
The risk became clear almost immediately following ChatGPT Health's January launch. Within days, Mount Sinai researchers submitted a peer-reviewed evaluation to Nature Medicine. In a structured test (60 vignettes, 960 total responses), the system under-triaged over half of physician-defined emergencies and showed inconsistent activation of crisis safeguards.
Millions of people are already using AI to decide whether something is “serious enough” to act on. Even a relatively small error rate becomes a real problem when people delay care, ignore red flags, or feel falsely reassured because an AI sounded calm and authoritative.
As healthcare communicators and marketers, we are well versed in checking the sources an AI summary is pulling from. When it comes to health information, we understand that factors such as the recency of research, whether it has been peer-reviewed, which health journal it is published in, real-life patient experience, and how similar that person's situation is to our own are all key to determining the integrity of information.
But, we are not necessarily the norm.
We must cater to all levels of digital and health literacy. The blend of legitimate research with less credible material is not always clearly signposted within AI content, and people's understanding of outputs varies greatly.
Embedding patient experience to ensure AI information accuracy: putting PAGs at the centre of this process
If our industry is serious about patient centricity, then AI accuracy and misinformation must become a shared territory that isn’t reserved for those with the technical expertise and commercial resources to influence it.
PAGs have long been trusted by patients for their lived experience, clarity, and relevance. They play a role that machines and algorithms can't replicate: identifying information gaps, misinformation "hotspots", and the realities of patient need. Greater collaboration between pharmaceutical organisations, PAGs, and not-for-profit partners to ensure that authoritative, patient-centred content feeds into both traditional search ecosystems and AI-driven summaries is therefore essential. If you work for a pharmaceutical company, is the accuracy of AI health summary information a conversation you are having with your PAG partners? Is misinformation something you are actively trying to tackle collaboratively? If not, why not?
Too often, the vital and unique perspective that PAGs can offer is brought to the table too late, when it should be foundational in shaping strategy and output from the get-go. We cannot resist the rise of AI and its role in summarising health information, so we must work collaboratively to ensure it is optimised responsibly.
Ultimately, improving the information patients receive through AI serves us all. When people are equipped with accurate, unbiased, compassionate, and trustworthy information, they make better decisions, feel more supported, and navigate their health with greater confidence. And that is an outcome worth striving for, together.
Are you keen to discuss how your organisation can be more patient-centric in its approach to AI, GEO, and health misinformation? If so, please get in touch at smt_pre@publicislangland.com.