
Misinformation in the age of patient trust transference: How do we make AI health summaries safer?

Think back. Remember the days when you used to type a few symptoms or a condition into Google and be faced with hundreds of websites to look through? You got a list of links and did your own filtering. Maybe you went straight to a patient website or a chat forum; maybe you just asked the question the next time you met your healthcare professional (HCP).

2021 wasn’t exactly a lifetime ago, but it might as well be.

This January, OpenAI announced that 40 million weekly active users are already turning to ChatGPT with health-related prompts.1 At the same time, a 2025 University of Pennsylvania survey found that 71% of US adults seek health information via search engines,2 where AI-generated summaries now sit at the top of the results page – making them the first (and often only) port of call for many patient queries.

Not only is this a major change in how we find information and what information we’re exposed to, it’s also a marker of significant trust transference.

The fact is, many of us view AI responses as good enough. In a recent survey, 63% of respondents felt that AI-generated health information was somewhat or very reliable,2 and 60% of us already don’t go any further than those neat, condensed responses.

For people navigating the stress and the physical and emotional fatigue of chronic and acute health conditions, this complacency is as understandable as it is dangerous.

While AI tools can increase access to health information in a heartbeat, the evidence shows that they don’t always get it right when it matters most.

When LLMs know enough to be dangerous

This risk was highlighted almost immediately after ChatGPT Health’s January launch.

Within days, Mount Sinai researchers had submitted an evaluation to Nature Medicine, since peer reviewed and published online.3 In a structured test (60 vignettes, 960 total responses), the system under-triaged more than half of physician-defined emergencies and was inconsistent in activating its crisis safeguards.3

Other research suggests that as many as 49.6% of AI chatbot responses to health queries are “problematic”.4

If millions of people are already using these tools to decide what their symptoms point towards or whether something is “serious enough” to act on, even a relatively small error rate becomes a real problem. And this error rate isn’t small.

As healthcare communicators and marketers, we know all about fact-checking, but we also know it isn’t the norm for most patients.

The reality is, by the time most patients visit their HCPs today, large language models (LLMs) have already started to shape what they believe and what they think they need.

Why patient advocacy groups (PAGs) must be at the centre of the solution

No algorithm has accountability to patients, but PAGs and pharmaceutical organisations do.

PAGs have earned patient trust by doing what LLMs can’t: demystifying the complex by reflecting lived experience, and providing a human voice that pharma companies aren’t always trusted to offer. That role is now more important than ever, and it belongs at the centre of how we respond to AI misinformation.

If our industry is serious about patient centricity – and I like to think it is – then accuracy and transparency cannot remain the preserve of those with technical expertise or commercial leverage. Instead, they must become a shared responsibility, and an urgent one at that.

Greater collaboration between pharma and PAGs is essential to ensure that inaccuracy is challenged and that authoritative, patient-centred content feeds into both Google’s AI-driven summaries and other LLM responses.

Too often, the vital perspective that PAGs offer is brought to the table too late, when it should be foundational in shaping strategy and output from the get-go. This must change.

As a first step, every pharma company that claims patient centricity as a core value should be able to answer two questions: what do LLMs currently say about the conditions affecting our patients, and how are we working with our PAG partners to improve it? If you can’t answer both, that core value is being undermined.

Are you keen to discuss how your organisation can be more patient-centric in its approach to AI, generative engine optimisation (GEO) and health misinformation? If so, please get in touch at smt_pre@publicislangland.com.

1. OpenAI. AI as a Healthcare Ally. Available at: https://cdn.openai.com/pdf/2cb29276-68cd-4ec6-a5f4-c01c5e7a36e9/OpenAI-AI-as-a-Healthcare-Ally-Jan-2026.pdf (last accessed May 2026).

2. Annenberg Public Policy Center. Many in U.S. consider AI-generated health information useful and reliable. Available at: https://www.annenbergpublicpolicycenter.org/many-in-u-s-consider-ai-generated-health-information-useful-and-reliable/ (last accessed May 2026).

3. Ramaswamy A, et al. Nat Med 2026. doi: https://doi.org/10.1038/s41591-026-04297-7 [Epub ahead of print].

4. Tiller NB, et al. BMJ Open 2026;16(4):e112695.
