The 5 Types of False Information You Encounter Every Day Without Knowing It
In short: False information isn't limited to completely fabricated "fake news." Researcher Claire Wardle identified seven forms of information disorder. Here are five you probably encounter every week, without necessarily spotting them.
When we talk about disinformation, we immediately think of the big lie. The completely made-up article, the absurd rumor, the obvious conspiracy. But those cases, in the end, are fairly easy to spot. What's far more insidious is everything else.
The vast majority of false information we encounter daily isn't entirely false. It's distorted. Reframed. Taken out of context. Exaggerated just enough to be believable. And that's precisely what makes it so dangerous.
Claire Wardle, co-founder of First Draft and one of the most recognized researchers on the subject, proposed a classification that has become the standard reference. In a report co-authored with Hossein Derakhshan for the Council of Europe, she distinguishes three broad families (misinformation, disinformation, and malinformation) and seven concrete forms of problematic content. UNESCO adopted this classification in its media literacy program. The World Economic Forum identified disinformation in 2024 as the most severe short-term global risk.
Here are the five most common forms. The ones you probably encounter every week.

1. False Context: Real Content in the Wrong Place
This is arguably the most widespread form of disinformation. And the most devious, because the content is real. The photo actually exists. The quote is accurate. The video hasn't been doctored. The problem is that it's being shown to you in a completely different context from the original.
You've definitely seen this: a photo of a protest in one country presented as coming from another. A politician's statement clipped from the speech that gave it meaning. A three-year-old video circulating as if it had just been filmed.
A recurring example: during the immigration debates in the United States, photos of Syrian refugees in Lesbos, Greece, from 2015, circulated on Facebook as if they showed the Central American migrant "caravan." Everything in the image was real. Everything about the context was false.
The reflex to adopt: verify the date and origin. A reverse image search can find the original publication in seconds. The question to ask yourself: "When and where was this content first published?"
2. Misleading Content: True Facts, False Framing
This one is particularly tricky. Each individual fact is accurate. It's the assembly that creates the deception. Certain data points are selected, others omitted, and an angle is chosen that shapes perception.
It takes many forms: a graph with a manipulated scale to exaggerate a trend. A real statistic taken out of context. An article citing a scientific report while (deliberately) leaving out the nuances. Cropped photos that completely change what you see.
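The graph trick is worth making concrete. When a chart's y-axis starts above zero, bar heights no longer represent the values themselves, and a small change can look enormous. A minimal sketch of the arithmetic (the function name and numbers are illustrative, not from any real charting tool):

```python
def visual_exaggeration(old, new, axis_min):
    """Ratio between the change a viewer *sees* (bar heights measured
    from a truncated axis) and the actual relative change."""
    true_change = (new - old) / old
    apparent_change = ((new - axis_min) - (old - axis_min)) / (old - axis_min)
    return apparent_change / true_change

# A value rises from 98 to 100 (roughly a 2% gain), but the chart's
# y-axis starts at 96: the second bar looks twice as tall as the first.
print(round(visual_exaggeration(98, 100, 96), 1))  # -> 49.0
```

In this example, every number plotted is accurate; only the axis choice turns a 2% change into a visual doubling, a roughly 49-fold exaggeration.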
It's hard to spot because you can't point to a specific lie. As Claire Wardle emphasizes, understanding how framing works (the choice of vocabulary, images, what's highlighted and what's left out) is essential for decoding information.
The reflex to adopt: when a piece of information triggers a strong emotional reaction, it's often the sign of heavy framing. Go find the original source. Read it in full. Compare with other outlets covering the same topic.
3. False Connection: When the Headline Tells a Different Story
You know the phenomenon, even if you don't necessarily have the word for it. It's "clickbait" in its most deceptive form. The headline promises something. The image suggests something. And the content tells a completely different story.
A sensational headline announcing a revelation the article doesn't contain. A photo associated with a topic it has nothing to do with. A caption implying something the text contradicts three paragraphs later.
The problem is our consumption habits. According to the Reuters Institute, a growing share of readers get their news solely from headlines and previews in their news feed, without ever opening the article. The headline creates an impression. The impression becomes what we remember. And what we remember, we share.
The reflex to adopt: read beyond the headline. Always. Especially when an article triggers a strong reaction. The headline may have told you a story the content doesn't confirm.
4. Impersonated Content: Credible Sources Imitated
This one plays on trust. A legitimate information source is imitated (a well-known media outlet, an institution, an expert) to lend credibility to false content.
In practice: a fake website whose URL resembles that of a known media outlet (the famous "abcnews.com.co" that imitated ABC News). Fake social media accounts impersonating journalists. Fabricated press releases bearing the logos of real organizations.
The 2025 German elections provided a textbook case. Operation "Storm-1516," attributed to Russia-linked actors, created more than 100 fake websites that spread deepfakes and fabricated stories targeting political figures. These sites mimicked credible news sources to amplify the impact of their content. This isn't craftsmanship. It's industrialization.
The reflex to adopt: check the exact URL of a website. Consult verified accounts on social media. And if an "explosive" piece of information comes from a single unknown source, take a step back.
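The lookalike-domain trick can even be screened for mechanically: compare a site's host against a list of legitimate outlets and flag near-misses. A minimal sketch using only the Python standard library (the `KNOWN_OUTLETS` list, the `lookalike_warning` helper, and the similarity threshold are illustrative assumptions, not any real tool's API):

```python
import difflib
from urllib.parse import urlparse

# Illustrative whitelist; a real checker would rely on a maintained
# list of legitimate news domains.
KNOWN_OUTLETS = {"abcnews.go.com", "bbc.com", "reuters.com", "lemonde.fr"}

def lookalike_warning(url, threshold=0.75):
    """Return a warning if the URL's host closely resembles a known
    outlet without actually being one; None otherwise."""
    host = urlparse(url).netloc.lower().removeprefix("www.")
    if host in KNOWN_OUTLETS:
        return None  # exact match: this is the real site
    for real in KNOWN_OUTLETS:
        if difflib.SequenceMatcher(None, host, real).ratio() >= threshold:
            return f"'{host}' resembles '{real}' but is not it"
    return None

print(lookalike_warning("http://abcnews.com.co/exclusive"))  # flags the fake
print(lookalike_warning("https://www.bbc.com/news"))         # None: genuine
```

The same string-similarity idea underlies real anti-phishing filters; the point of the sketch is simply that "abcnews.com.co" is close enough to the real domain to fool a reader, yet far enough to be caught by a trivial comparison.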
5. Manipulated Content: Real Material, Altered to Deceive
The last category starts from authentic content (photos, videos, documents) and modifies it to create a false impression. With the rise of generative AI, this form has exploded in recent years.
A video whose speed is altered to change the perception of a gesture. A deepfake putting words in a politician's mouth. A retouched image where an element has been added or removed. A synthesized audio recording imitating someone's voice.
A concrete case: after a 2018 White House press conference, a video of an exchange between a journalist and an intern was subtly sped up to make the physical contact appear more forceful than it actually was. The Washington Post placed the shared clip and the original C-SPAN footage side by side, making the manipulation obvious.
The problem is that manipulation tools are becoming ever more accessible. According to a study published in Royal Society Open Science in 2025, AI-generated disinformation represents a growing challenge, because even traditional fact-checking interventions struggle to effectively counter its impact.
The reflex to adopt: be particularly vigilant with video and audio content that triggers strong emotion. Look for the original source. Deepfake detection tools exist and are improving, even if they remain imperfect.
What About the Other Two Types?
Claire Wardle's full classification includes two additional categories.
Satire and parody are not intended to cause harm, but can be taken at face value when they circulate outside their original context. A satirical post shared without its humor label quickly becomes "news."
Entirely fabricated content is the most extreme form: everything is invented, from A to Z, with the intent to deceive. It's what we picture when we say "fake news," but in reality, it's a minority of the problem. The most dangerous material is everything that sits between true and false.
Beyond "True or False"
The main contribution of Claire Wardle's framework is moving us past the binary question. True or false isn't enough. Disinformation operates on a spectrum. As she puts it: "It's complicated."
Most problematic content isn't a total fabrication. It's pieces of reality assembled, reframed, or taken out of context to produce a misleading impression. That's what makes it so hard to spot. And so effective.
According to the Eurobarometer, 53% of European citizens say they encounter disinformation "very often or often" in a given week. And when you consider the variety of forms it takes (false context, biased framing, misleading headlines, impersonation, manipulation), it's easy to understand why a simple "true or false" label is no longer enough.
What If You Could Verify as Fast as You Doubt?
We've just seen five forms of false information. Five different mechanisms, five ways to deceive. And for each one, the reflex is the same: ask the question, find the source, compare, verify. It's doable. But let's be honest: in the flow of a typical day, how often do we actually take the time to do it?
That's the whole paradox. We know what we should do. We almost never do it, because it takes too long, it's too complicated, or simply because we don't know where to start.
This is the exact problem OpenTruth was built to solve. The idea is simple: allow anyone to submit suspicious content and receive a sourced analysis within seconds. The system identifies what type of claim it is (and you've just seen why that matters), automatically searches for reliable sources, and provides a transparent credibility score with the references supporting it.
You don't need to know whether you're dealing with false context, misleading content, or manipulated content. OpenTruth does that sorting for you and gives you the elements to judge for yourself. Based in Switzerland, the platform benefits from Swiss neutrality and one of the world's strictest data protection laws.
Claire Wardle reminds us: "When you hit 'share,' you become responsible for that information." OpenTruth gives you the means to exercise that responsibility in seconds, without prior expertise.
Sources:
- Wardle, C. (2017). "Fake News. It's Complicated." First Draft.
- Wardle, C. & Derakhshan, H. (2017). Information Disorder: Toward an Interdisciplinary Framework for Research and Policy. Council of Europe.
- First Draft (2019). "Understanding Information Disorder."
- UNESCO. "Module 4: Media and Information Literacy Competencies to Tackle Misinformation, Disinformation and Hate Speech."
- Eurobarometer (2023). Survey on disinformation in the EU.
- EDMO (2025). "Propaganda and Disinformation: Lessons from 2024/25 Elections in Europe."
- Spearing, E.R. et al. (2025). "Countering AI-generated misinformation with pre-emptive source discreditation and debunking." Royal Society Open Science, 12(6).
- SciLine (2021). "Dr. Claire Wardle: The science of misinformation."
This article is part of a series published by OpenTruth, the Swiss AI-powered automated verification platform. Our mission: making information verification accessible to everyone, in seconds.