Why does disinformation spread 6× faster than the truth?
In short: An MIT study showed that false information spreads six times faster than verified facts on social media. It's not the bots' fault. It's ours. And if the problem is speed, then the solution must be fast too.
We've all shared a piece of false information without reading it. A shocking headline, a scary stat, a viral video, a tweet screenshot that seems too outrageous to be true. We share, we comment, we get outraged. And then sometimes, a few hours later, we find out it was false.
This reflex has been studied at scale by MIT researchers. In 2018, Soroush Vosoughi, Deb Roy, and Sinan Aral published in the journal Science the largest study ever conducted on the spread of false news online. Their dataset: approximately 126,000 diffusion cascades on Twitter, relayed by 3 million users and shared more than 4.5 million times between 2006 and 2017.
The result is stark. False news reaches its first 1,500 readers roughly six times faster than verified information, and it is 70% more likely to be retweeted. And this holds across all categories: politics, health, science, economics, nothing is spared.

It's Not the Bots. It's Us.
This is perhaps the most unsettling finding of the study. When the researchers removed all automated accounts from their sample, the numbers barely changed. Bots amplify the noise, of course, but they are not the main driver of propagation.
Humans are.
Professor Sinan Aral puts it well: "Human decision-making influences the spread of false news more than we thought." And the most paradoxical part is that accounts sharing false information have, on average, fewer followers, are less active, and less often verified than those sharing truthful content. Disinformation spreads despite these disadvantages, not because of them.
We Share Falsehoods Because They Feel New
So why? Why do we collectively prefer to relay the false over the true?
The MIT researchers offer a fairly simple explanation: novelty. False information is perceived as more novel and more surprising than established facts. And on social media, sharing something new is a small boost in social status. It means being the person who "knew before everyone else."
The study also revealed a very different emotional profile. False information triggers surprise, fear, and disgust. True information tends to inspire sadness, anticipation, and trust. That first emotional cocktail is exactly what drives impulsive sharing. We share before we verify, because emotion arrives faster than reflection.
Polarization Pours Fuel on the Fire
In 2021, an MIT team pushed the analysis further with a theoretical model. Two factors massively aggravate the problem: network hyperconnectivity and opinion polarization. The denser a network and the more divided its members, the more dubious content circulates.
The main mechanism, according to this model, is persuasion. People don't share false information because they necessarily believe it. They share it because they think it will convince others to join their side. A false but striking piece of information becomes a rhetorical weapon. It's not shared for its truthfulness, but for its effectiveness.
In Europe, the Numbers Are Staggering
European data confirms the scale of the problem. According to the Eurobarometer, 83% of European citizens consider false news a threat to democracy. 81% identify disinformation and foreign interference as urgent problems, especially during election periods. And 53% of Europeans say they encounter disinformation "very often or often" during a normal week.
Among young people, it's even more concerning. Teenagers aged 16 to 18 trust TikTok and Instagram more than any other news source (Youth Eurobarometer 2024). Yet 27% of TikTok users themselves admit they struggle to spot misleading content on the platform, according to the Reuters Institute.
We're dealing with a generation that overwhelmingly gets its news from platforms where it acknowledges not knowing how to tell true from false. This is a structural problem, not a generational one.
Verification Takes Too Long and Costs Too Much
Fact-checkers know this better than anyone. The International Fact-Checking Network (IFCN) published a report in 2025 painting a rather bleak picture: financial constraints, online harassment, eroding audiences. Fact-checkers are under pressure from all sides.
And the final blow came from Meta, which ended its fact-checking program in the United States in January 2025. Roughly 60% of fact-checking organizations participated in that program, and more than half derived a third of their revenue from it. Overnight, a significant part of the ecosystem was left vulnerable.
The underlying issue is a structural imbalance. Creating and spreading a piece of false information takes a few seconds. Verifying, contextualizing, and debunking it requires hours of skilled work, access to reliable databases, and sharp editorial expertise. This ratio is unsustainable.
What We Can Do About It
Research doesn't just describe the problem. It also identifies concrete levers.
The first is surprisingly simple. Professor David Rand of MIT Sloan showed that when people are asked to think about the accuracy of information before sharing it (an "accuracy nudge"), the quality of what they relay improves significantly. No professional fact-checker needed. Just a pause of a few seconds. And it works regardless of political beliefs.
The second lever is accessibility. The IFCN report shows that short video formats are the most effective way to reach new audiences (74% of fact-checkers confirm this). But beyond format, the real challenge is taking verification out of the closed circle of professionals. Today, automated fact-checking tools remain largely inaccessible to the general public.
The third is AI. While technology has enabled the production of disinformation on an industrial scale (nearly 50% of online content is now estimated to be AI-generated), it can also become the primary verification tool. Agentic AI systems, capable of automatically identifying, sourcing, and evaluating claims, can reduce verification time by 90% and costs by 70%.
Finally, MIT's 2021 theoretical model suggests that we can act on the "costs" of sharing. Visible reliability labels, friction in the sharing process, credibility scores: all of these make sharing false information a little less free, and therefore a little less automatic.
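The intuition behind these "sharing costs" can be illustrated with a toy branching-process simulation (a deliberate simplification, not the MIT model itself): each sharer exposes a fixed number of followers, and each exposed person reshares with some probability. Friction that nudges that probability down, even slightly, can push a cascade below the branching threshold and make it fizzle out instead of going viral. All parameter values here are illustrative assumptions.

```python
import random

def simulate_cascade(share_prob, followers=10, max_steps=6, seed=0):
    """Toy branching-process model of a sharing cascade.

    Each current sharer exposes `followers` people; each exposed
    person reshares with probability `share_prob`. Returns total
    reach (number of people who saw the content at least once).
    """
    rng = random.Random(seed)
    sharers, reach = 1, 1
    for _ in range(max_steps):
        exposed = sharers * followers
        # Each exposed person independently decides to reshare.
        sharers = sum(1 for _ in range(exposed) if rng.random() < share_prob)
        reach += exposed
        if sharers == 0:
            break  # cascade dies out
    return reach

# Friction (labels, confirmation prompts) lowers the reshare
# probability for dubious content. The cascade grows only while
# share_prob * followers > 1, so a modest drop can be decisive.
print(simulate_cascade(share_prob=0.15))  # supercritical: 0.15 * 10 > 1
print(simulate_cascade(share_prob=0.08))  # subcritical:   0.08 * 10 < 1
```

Averaged over many random seeds, the supercritical setting reaches far more people than the subcritical one, which is the whole point of adding friction.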
The Real Problem Is Time
If we step back and look at everything we've just covered, a common thread emerges. Disinformation wins because it's fast. It exploits our cognitive biases, our appetite for novelty, our emotions. And it spreads within a digital ecosystem designed for engagement, not for truth.
While falsehoods spread, verification arrives too late. Opinions have already crystallized, the debate has already polarized. The correction exists, but nobody sees it.
The truth doesn't need to spread six times faster than lies. It just needs to arrive on time.
This is exactly the conviction that gave birth to OpenTruth.
OpenTruth: Restoring the Balance
OpenTruth was born from a simple observation. If the central problem of disinformation is the speed asymmetry between falsehoods and their verification, then the solution must tackle that asymmetry. Not in ten years. Not reserved for newsrooms and researchers. Now, and for everyone.
In practice, OpenTruth is a Swiss platform that allows anyone to submit a claim, an article, or suspicious content and receive a sourced analysis within seconds. The system identifies the type of claim, automatically searches for reliable sources, and provides a transparent credibility score along with all the references supporting it.
Where a human fact-checker spends several hours on a single verification, OpenTruth's AI agents bring that process down to seconds, without sacrificing rigor. Every result comes with its sources, its reasoning, and its confidence level. The idea is not to replace human judgment, but to give it the means to move fast.
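The general shape of such a pipeline (claim typing, then source retrieval, then a transparent score with references) can be sketched in a few lines. This is a minimal illustrative mock-up under my own assumptions, not OpenTruth's actual implementation: the classifier is a crude keyword heuristic standing in for an NLP model, and the retrieval function is a stub where a real system would query news archives and fact-check databases.

```python
from dataclasses import dataclass, field

@dataclass
class Verdict:
    claim: str
    claim_type: str
    score: float                       # 0.0 (unsupported) .. 1.0 (well supported)
    sources: list = field(default_factory=list)

def classify_claim(claim: str) -> str:
    """Crude keyword-based claim typing (stand-in for a trained model)."""
    lowered = claim.lower()
    if any(w in lowered for w in ("%", "percent", "times", "million")):
        return "statistical"
    if any(w in lowered for w in ("said", "announced", "claims")):
        return "attribution"
    return "general"

def retrieve_sources(claim: str) -> list:
    """Stub for retrieval against trusted databases and search APIs."""
    return [{"url": "https://example.org/source", "supports": True}]

def verify(claim: str) -> Verdict:
    """Run the full pipeline and return a scored, sourced verdict."""
    sources = retrieve_sources(claim)
    supporting = sum(1 for s in sources if s["supports"])
    score = supporting / len(sources) if sources else 0.5
    return Verdict(claim, classify_claim(claim), score, sources)

verdict = verify("False news is 70% more likely to be retweeted.")
print(verdict.claim_type, verdict.score)
```

The key design point the sketch captures is transparency: the verdict object carries not just a score but the claim type and every source behind it, so a reader can audit the reasoning rather than trust a bare number.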
The choice of Switzerland is not cosmetic. When it comes to verifying information, trust in the tool matters as much as the quality of the analysis. Swiss neutrality, one of the world's strictest data protection laws (FADP), and a world-class technology ecosystem provide a framework that few other countries can offer.
Professor Rand's "accuracy nudge," mentioned earlier, works. But it relies on individual goodwill. OpenTruth turns that pause into a concrete action: a doubt, a verification, a few seconds. No prior expertise needed.
Sources:
- Vosoughi, S., Roy, D., & Aral, S. (2018). "The spread of true and false news online." Science, 359(6380), 1146-1151. DOI: 10.1126/science.aap9559
- MIT News (2018). "Study: On Twitter, false news travels faster than true stories."
- MIT News (2021). "Systems scientists find clues to why false news snowballs on social media."
- Flash Eurobarometer on Fake News and Online Disinformation. European Commission.
- EDMO (2025). "Propaganda and Disinformation: Lessons from 2024/25 Elections in Europe."
- Poynter / IFCN (2025). "Financial pressures and harassment are among top concerns for fact-checkers worldwide."
- CEPR (2025). "Fact-checking reduces the circulation of misinformation."
This article is part of a series published by OpenTruth, the Swiss AI-powered automated verification platform. Our mission: making information verification accessible to everyone, in seconds.