27th November 2025
Misinformation tends to flourish in times of uncertainty, distrust, or instability, when simple, emotionally appealing narratives can feel more comforting than complex, nuanced truths. For those experiencing poor health, this emotional turmoil is only heightened. In such environments, misleading claims can offer a sense of clarity, control, or even belonging, especially to those who feel alienated from mainstream systems of authority.
- MISINFORMATION noun
  False or inaccurate information shared without harmful intent. It often arises from misunderstandings or outdated knowledge, for example, someone sharing an ineffective home remedy genuinely believing it to be beneficial.1
- MALINFORMATION noun
  True information that is used maliciously to cause harm. This could include leaking private health data or selectively presenting facts out of context to mislead or stir up distrust.1
- DISINFORMATION noun
  Deliberate and malicious false information created and shared to mislead or cause harm. In health, this might include fabricated studies or conspiracy theories designed to undermine trust in evidence-based medicine, foster societal division, gain political influence, or for commercial benefit.1
Once acting as digital mirrors of our real-life social networks, many platforms have evolved into algorithmically curated spaces where content is tailored to maximise engagement. Because these algorithms reward engagement rather than accuracy, emotionally charged misinformation spreads more readily. Research shows the scale of the problem: one study of an 11-year period on X (then Twitter) found that the truth took about six times as long as falsehood to reach 1,500 people.2
At the individual level, algorithms tend to promote content similar to what users have previously interacted with. Once someone engages with misinformation, they’re likely to see more of it, reinforcing false beliefs and deepening echo chambers. Worse, misinformation can continue to shape people’s reasoning even after they have been presented with the correct facts, a phenomenon known as the continued influence effect. This feedback loop, driven by algorithmic design and emotional resonance, and compounded by shrinking attention spans that may erode critical thinking, makes misinformation not only more visible than ever, but also more potent and persuasive.3
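To make that mechanism concrete, here is a minimal, illustrative sketch in Python of how an engagement-weighted feed can surface emotionally charged misinformation above accurate content, and how each interaction feeds back into what a user sees next. Every post, score, and weight here is invented for illustration; this is a toy model, not any platform’s actual ranking algorithm.

```python
# Toy model of an engagement-ranked feed. All posts, scores, and weights
# are invented for illustration; no real platform's algorithm is implied.

posts = [
    {"id": "accurate_summary", "emotional_charge": 0.2, "accuracy": 0.95},
    {"id": "miracle_cure_claim", "emotional_charge": 0.9, "accuracy": 0.10},
]

# User's interest profile: post id -> affinity, updated by each interaction.
user_affinity = {"accurate_summary": 0.5, "miracle_cure_claim": 0.5}

def predicted_engagement(post):
    # Engagement-driven ranking rewards emotional resonance and prior
    # interaction; note that accuracy does not appear in the score at all.
    return post["emotional_charge"] * user_affinity[post["id"]]

def show_feed():
    return sorted(posts, key=predicted_engagement, reverse=True)

def interact(post_id):
    # The feedback loop: engaging with content boosts similar content,
    # deepening the echo chamber described above.
    user_affinity[post_id] = min(1.0, user_affinity[post_id] * 1.5)

# The emotionally charged falsehood already ranks first...
print([p["id"] for p in show_feed()])
# ...and clicking it only increases its score for the next feed.
interact("miracle_cure_claim")
print([p["id"] for p in show_feed()])
```

The point of the sketch is simply that when accuracy carries zero weight in the objective being optimised, the feedback loop optimises for something else entirely.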
Alarmingly, Point.1 data shows that over 80% of healthcare professionals now express concern about the impact of misinformation on their patients’ health.4 Patients report encountering false or misleading health claims across nearly every domain of daily life: from social media and podcasts to celebrities, religious figures, government, and even their friends and family.4 In fact, 67% of patients believe they have been exposed to misinformation about their condition in the past 12 months.4 Social media algorithms have moved misinformation from the shadows and fringes of the internet into mainstream, globally popular platforms. One prominent example of this increasing presence and normalisation of misinformation in everyday life is the popular podcast “Diary of a CEO”.
Originally focused on business and entrepreneurship, its content has increasingly shifted toward health and wellness. This pivot has driven a surge in monthly YouTube views, from 9 million in 2023 to 15 million in 2024.5 However, with this growth has come controversy. A 2024 BBC World Service investigation reviewed 15 health-related episodes of the podcast and found that each contained an average of 14 harmful health claims that directly contradicted established scientific evidence.5 These claims from guests included assertions that cancer could be treated with a ketogenic diet instead of proven medical treatments, anti-vaccine conspiracies, claims that eating gluten causes autism, and the suggestion that evidence-based medication is “toxic” for patients. When misinformation pervades daily media and is propagated by seemingly “expert” guests, the task of separating truth from falsehood, and helpful from harmful, becomes increasingly complex.
Just as social media reshaped the information landscape, AI is poised to transform healthcare even more profoundly. The potential benefits are vast and genuinely exciting. Imagine algorithms that detect subtle patterns of diabetic retinopathy from an eye scan years before a human ophthalmologist could, or AI models that analyse a patient’s unique genomic data to predict their response to a specific cancer therapy. These tools promise to accelerate drug discovery, personalise treatment plans, and streamline administrative tasks, freeing HCPs to focus on the patient care that is so desperately needed. In an ideal world, AI would serve as a powerful co-pilot for clinicians and an empowering, reliable resource for patients.
However, this same technology is fuelling a tsunami of misinformation. AI is a powerful accelerant, capable of generating convincing content at industrial scale, mimicking the voice of authority, and amplifying falsehoods with unprecedented speed. AI models can produce full articles, social media posts, and video scripts in seconds, all tailored to exploit common health anxieties. These systems can adopt a confident, clinical tone, cite fabricated studies, or impersonate medical experts with chilling accuracy. This AI-generated content is then fed into the same engagement-driven algorithms that govern social media, creating a hyper-efficient engine for spreading misleading and dangerous health narratives.
This new reality has created a complex and contradictory dynamic for patients. Point.1 data highlights the paradox: 46% of patients report using AI tools like chatbots to check health-related information in the last month, despite 44% stating they do not trust these tools as reliable sources.4 This behaviour reveals a critical vulnerability in the modern patient journey: the information vacuum created by systemic pressures and delayed access to care is now being filled by AI’s promise of instant answers.
Healthcare professionals are witnessing the fallout firsthand, with 32% now believing that AI tools are a direct contributor to the spread of health misinformation.4 Unlike misinformation from a human source, which is limited by effort, AI’s capacity to generate and adapt content is effectively limitless, creating a unique and persistent challenge. The solution, therefore, is not to reject the technology, but to establish robust guardrails, such as verification systems, digital watermarking for AI-generated healthcare content, and a renewed focus on public health literacy, harnessing AI’s power for good while mitigating its capacity to harm and mislead patients.
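As a purely illustrative example of what one such guardrail could look like, the sketch below (Python, standard library only) attaches a cryptographic provenance tag to AI-generated text and verifies it downstream. This is a hypothetical scheme invented for this post, not a description of any deployed watermarking standard; real-world proposals for content provenance are considerably more sophisticated.

```python
# Hypothetical provenance-tagging sketch: a generator signs its output with
# an HMAC, and a platform verifies the tag before labelling the content as
# AI-generated. Illustrative only; not a real watermarking standard.
import hmac
import hashlib

SECRET_KEY = b"shared-secret-between-generator-and-verifier"  # assumption

def tag_content(text: str) -> str:
    """Return the text bundled with a provenance tag."""
    digest = hmac.new(SECRET_KEY, text.encode("utf-8"), hashlib.sha256).hexdigest()
    return f"{text}\n--provenance:{digest}"

def verify_content(tagged: str) -> bool:
    """Check whether the provenance tag matches the text it accompanies."""
    text, _, digest = tagged.rpartition("\n--provenance:")
    expected = hmac.new(SECRET_KEY, text.encode("utf-8"), hashlib.sha256).hexdigest()
    return hmac.compare_digest(digest, expected)

article = tag_content("AI-generated summary of treatment options...")
print(verify_content(article))                     # True: tag intact
print(verify_content(article.replace("AI", "x")))  # False: content altered
```

A metadata tag like this is trivially strippable, which is exactly why the paragraph above pairs technical guardrails with public health literacy rather than relying on either alone.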
Just as with social media, AI holds the power to revolutionise the way patients manage their health, and its potential to transform models of care is vast. Yet, left unchecked, it becomes a breeding ground for misinformation; like all powerful tools, it can cause profound harm in the wrong hands. So why has misinformation gripped our social media and AI-driven world so tightly? The answer is all too familiar: money and power.
“OpenAI has done work on health benchmarking and includes guidelines in their terms and conditions, but that’s about as far as they go. Some companies are grounding their models on trusted health information, though I’m struggling to think of many doing this effectively for public consumption.”
Dr Keith Grimes, Founder & Partner, Curistica
Head to the Doctored Truths Report to continue reading and find out more.
Sources
- Konsyse. Misinformation vs. Disinformation vs. Malinformation. Available at: https://www.konsyse.com/articles/misinformation-vs-disinformation-vs-malinformation/. Accessed: September 2025.
- Vosoughi S et al. Science 2018; 359(6380): 1146-1151.
- American Psychological Association. Why our attention spans are shrinking, with Gloria Mark, PhD. Available at: https://www.apa.org/news/podcasts/speaking-of-psychology/attention-spans. Accessed: September 2025.
- Point.1 – proprietary data platform. Data on file.
- BBC News (Jacqui Wakefield). Steven Bartlett sharing harmful health misinformation in Diary of a CEO podcast. Available at: https://www.bbc.co.uk/news/articles/c4gpz163vg2o. Accessed: September 2025.