AI and Disinformation: When Technology Is Both the Problem and the Solution
In short: Artificial intelligence has turned the production of false information into an industry. Deepfakes, synthetic content, automated disinformation factories: falsehoods have never been easier to produce. But AI is also our best tool for detecting them. Here's how this paradox is redefining our relationship with truth.
In February 2024, an employee at a major British engineering firm transferred 25 million dollars to fraudsters. Not because of a phishing email. Not because of a forged document. Because of a video call. Everyone else on the call, including the company's chief financial officer, was a deepfake generated by artificial intelligence. Nobody in the virtual room was real.
This case is not an isolated incident. It's the sign of a tipping point. AI has become the most powerful tool ever created for manufacturing disinformation. And at the same time, paradoxically, it's our best hope for fighting it.

AI as a Disinformation Machine
The numbers are staggering. According to an Ahrefs analysis covering nearly one million new web pages published in April 2025, 74.2% contained content detectable as AI-generated. On the deepfake front, we've gone from 500,000 files in 2023 to approximately 8 million in 2025 according to British government estimates. Deepfake fraud incidents increased by 257% between 2023 and 2024. And the first quarter of 2025 already surpassed the previous year's total.
UNESCO speaks of a "synthetic reality threshold": a point beyond which humans can no longer distinguish the real from the fabricated without technological assistance. And we are crossing that threshold. A 2025 iProov study revealed that only 0.1% of participants correctly identified all the real and fake content presented to them. The human detection rate for high-quality video deepfakes drops to 24.5%. In other words: our eyes and ears are no longer enough.
The Three Vectors of Falsehood
Text: Language models can produce articles, comments, and social media posts at an industrial scale. The production cost of a piece of false textual information has become virtually zero. Automated content farms flood platforms with misleading narratives, drowning verified information in an ocean of noise.
Image and video: Advances in image and video generators (Sora, Veo, and open-source tools) have democratized the creation of strikingly realistic synthetic media. As Siwei Lyu, director of the UB Media Forensic Lab, points out, 2025-era deepfakes feature coherent movements, stable identities, and sufficient quality to deceive non-expert viewers, especially in low-resolution video calls and on social media.
Voice: Voice cloning has crossed what experts call the "indistinguishability threshold." A few seconds of audio are enough to create a convincing clone, with natural intonation, pauses, and even breathing sounds. According to McAfee, 70% of people say they are not confident in their ability to distinguish a real voice from a cloned one.
The Geopolitical Weapon
The geopolitical dimension is equally alarming. In Romania, the presidential elections at the end of 2024 highlighted the complexity of the problem. Candidate Călin Georgescu, virtually unknown in polls a month before the vote, amassed 62 million views on TikTok in one week and won the first round. Romanian intelligence services attributed this meteoric rise to a coordinated campaign involving automated accounts, algorithmic amplification, and undeclared paid promotion. The second round was annulled by the Constitutional Court.
But the story doesn't end there. An investigation by the Romanian tax administration revealed that it was actually a Romanian political party, the National Liberal Party, that had funded at least part of the TikTok campaign. In February 2026, a report from the U.S. House Judiciary Committee cited internal TikTok documents indicating that the platform had found "no evidence" of a coordinated Russian campaign. Romanian President Nicușor Dan, however, contested this conclusion, noting that TikTok itself had identified and removed tens of thousands of fake accounts and hidden influence networks around the election, some linked to Sputnik. The European Commission called the American report's accusations "entirely unfounded."
What this affair shows is that information manipulation during elections cannot be reduced to a simple scenario. The sources of influence are multiple (state, partisan, commercial), platforms struggle to distinguish organic amplification from coordinated manipulation, and AI tools are making the boundary between the two increasingly blurry.
According to the Eurobarometer, 81% of European citizens consider disinformation and foreign interference to be urgent problems, particularly during election periods. The Romanian affair shows why.
AI Turned Against Disinformation
Here lies the productive paradox: precisely because AI can produce disinformation at scale, only AI can verify it at the same scale. The same technologies fueling the problem are being turned against it.
Detecting Synthetic Content
Several technical approaches now allow the identification of AI-generated content. Forensic metadata analysis detects inconsistencies in information embedded in files (creation date, source software, missing or suspicious GPS coordinates). Frequency analysis spots artifacts invisible to the naked eye in image pixels or audio spectrograms. Machine learning models, trained on millions of examples of real and synthetic content, identify statistical signatures specific to AI generators: noise patterns, temporal micro-inconsistencies in videos, abnormal regularities in vocal prosody.
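The frequency-analysis idea can be made concrete with a minimal sketch. This is an illustrative toy, not a production detector: it computes what share of an image's spectral energy sits in high frequencies, on the assumption (borrowed from the forensics literature) that generative upsampling often leaves abnormal high-frequency patterns. The function name and the 0.25 cutoff are invented for the example; a real system would calibrate against a corpus of authentic photos and combine many such signals.

```python
import numpy as np

def high_frequency_energy_ratio(image: np.ndarray, cutoff: float = 0.25) -> float:
    """Share of spectral energy above a cutoff radius in the 2-D FFT.

    A ratio far outside the range observed on real photos is one
    (weak) signal of possible synthetic origin. The cutoff here is
    arbitrary; real detectors learn thresholds from data.
    """
    # Collapse color channels to a single luminance-like plane.
    gray = image.mean(axis=2) if image.ndim == 3 else image
    # Power spectrum, with the DC component shifted to the center.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2
    h, w = spectrum.shape
    yy, xx = np.ogrid[:h, :w]
    # Normalized distance from the spectrum's center, in [0, ~1.4].
    radius = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)
    high = spectrum[radius > cutoff].sum()
    return float(high / spectrum.sum())
```

On its own such a ratio proves nothing; its value is as one feature among many that a trained classifier can weigh.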
The AI detection tools market is growing at an annual rate of 28 to 42%. What our brains can no longer spot, machines still can. And the methods are diversifying: invisible watermarking embedded at generation, content provenance analysis via protocols like C2PA, and behavioral detection that identifies diffusion patterns typical of coordinated campaigns.
Automated Fact-Checking
Beyond deepfake detection, AI enables something more ambitious: automated verification of factual claims. Agentic AI systems (sets of specialized agents working in sequence) can extract claims from content, compare them against databases of reliable sources, evaluate the consistency of evidence, and produce a reliability score. All within seconds.
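The claim-extraction, evidence-retrieval, and scoring stages described above can be sketched as a chain of small "agents". Everything here is a stand-in: the sentence splitter replaces an LLM extractor, the in-memory dictionary replaces live source databases, and the binary score replaces a real evidence-weighing model. The point is the shape of the pipeline, each stage handing structured output with provenance to the next.

```python
from dataclasses import dataclass, field

@dataclass
class Verdict:
    claim: str
    score: float                       # 0.0 = unsupported, 1.0 = well supported
    sources: list = field(default_factory=list)

# Hypothetical trusted corpus; a real system would query live databases.
TRUSTED_FACTS = {
    "water boils at 100 c at sea level": ["physics-handbook"],
}

def extract_claims(text: str) -> list:
    # Agent 1: naive sentence split standing in for an LLM claim extractor.
    return [s.strip() for s in text.split(".") if s.strip()]

def retrieve_evidence(claim: str) -> list:
    # Agent 2: look the claim up in the trusted corpus.
    return TRUSTED_FACTS.get(claim.lower(), [])

def score_claim(claim: str) -> Verdict:
    # Agent 3: turn evidence into a reliability score with full provenance.
    sources = retrieve_evidence(claim)
    return Verdict(claim, 1.0 if sources else 0.0, sources)

def verify(text: str) -> list:
    return [score_claim(c) for c in extract_claims(text)]
```

Because every `Verdict` carries its sources, the process stays traceable end to end, which is the property the paragraph above emphasizes.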
This approach doesn't replace human judgment. It amplifies it. It makes it possible to process a volume of information that no newsroom could manually verify, while providing complete traceability of the verification process.
Algorithmic Accuracy Nudges
Research from MIT Sloan has shown that simple "accuracy nudges" (reminders inviting users to think about the veracity of information before sharing it) significantly reduce the spread of false news. AI can integrate these nudges contextually and in a personalized way, by identifying content likely to be misleading before the user even shares it.
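In code, a contextual nudge is little more than a gate between a risk model and the share button. A minimal sketch, assuming an upstream classifier already produces a risk score between 0 and 1 (the function name and the 0.7 threshold are illustrative):

```python
from typing import Optional

def accuracy_nudge(post_text: str, misinformation_risk: float,
                   threshold: float = 0.7) -> Optional[str]:
    """Return a nudge prompt when an upstream model flags the post as risky.

    `misinformation_risk` is assumed to come from a classifier scoring
    the post between 0 (benign) and 1 (likely misleading).
    """
    if misinformation_risk >= threshold:
        return ("Before you share: how confident are you that this "
                "content is accurate?")
    return None  # low-risk content is shared without added friction
```

The design point is that the nudge is targeted: low-risk content flows freely, so the intervention stays cheap and does not train users to ignore it.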
The Regulatory Framework Is Catching Up
Legislators are moving. Slowly, but they're moving.
The EU AI Act, whose deepfake labeling requirements became mandatory in August 2025, requires that AI-generated content be clearly identified as such. The Digital Services Act (DSA) imposes increased responsibility on platforms in the fight against disinformation. The strengthened Code of Practice on Disinformation of 2022 establishes a co-regulation framework.
In the United States, the TAKE IT DOWN Act, signed in May 2025, criminalizes the non-consensual creation and distribution of intimate images, including those generated by AI. In the United Kingdom, the Online Safety Act imposes a duty of care on platforms against illegal content, deepfakes included.
It's a start. But regulation alone won't be enough. By the time a law is passed, the technology has already evolved three times over. We also need tools that act in real time, as close as possible to users.
The Real Question: Who's Faster?
The real issue isn't whether AI can detect disinformation. It can. The question is whether detection can keep pace with production. It's a technological arms race where the advantage constantly shifts between attackers and defenders.
A few reasons for cautious optimism. First, detection has a structural advantage: generative models leave statistical fingerprints that detection models can learn to recognize. Second, the sheer volume of disinformation makes automated verification not just desirable but indispensable, which accelerates investment in these technologies. Third, the European regulatory framework creates transparency and labeling obligations that push toward the adoption of detection tools.
But the challenges remain immense. Deepfake quality is advancing faster than detector quality. Content is evolving toward real-time synthesis (interactive avatars rather than pre-recorded videos). And the democratization of generative tools means that anyone can produce sophisticated disinformation from their laptop.
What This Actually Changes
We've just seen the problem in all its complexity. AI producing falsehoods. AI detecting falsehoods. Regulation trying to keep up. The race between attackers and defenders.
But on a daily basis, for someone scrolling through their news feed in the morning with a coffee, all of this remains fairly abstract. The real question is: what do I do when I come across dubious content?
Three things.
Accept that our senses are no longer enough. If even experts only detect video deepfakes a quarter of the time, it's reasonable to admit that we can no longer rely solely on what we see and hear. That doesn't mean becoming paranoid. It means cultivating a methodical doubt, especially when facing content that triggers a strong emotional reaction.
Equip yourself. Just as we use antivirus software for our computers, verification tools are becoming necessary for our information consumption. Manual verification simply no longer matches the scale of the problem.
Demand transparency. Support initiatives that label AI-generated content, that make recommendation algorithms more transparent, and that invest in disinformation detection.
OpenTruth: AI on the Right Side
OpenTruth was created to provide a concrete answer to this paradox. If AI is both the problem and the solution, then the solution must be put in everyone's hands. Not just newsrooms, researchers, or governments.
OpenTruth is a Swiss automated verification platform that does exactly that. You submit suspicious content (a claim, an article, a post), and within seconds, specialized AI agents analyze it: identification of the claim type, automatic search for reliable sources, transparent credibility score with all the references supporting it.
Where a human fact-checker spends several hours on a single verification, OpenTruth brings the process down to seconds. Without sacrificing rigor: every result comes with its sources, its reasoning, and its confidence level. The goal is not to replace human judgment, but to give it the means to move as fast as disinformation.
The choice of Switzerland is not incidental. When it comes to information verification, trust in the tool is as important as the quality of the analysis. Swiss neutrality, one of the world's strictest data protection laws (FADP), and a world-class technology ecosystem provide a framework that few other countries can offer.
Sources:
- UNESCO (2025). "Deepfakes and the crisis of knowing."
- Lyu, S. (2025). "Deepfakes leveled up in 2025 – here's what's coming next." The Conversation.
- Keepnet Labs (2025). "Deepfake Statistics & Trends 2025."
- Surfshark (2025). "Deepfake statistics 2025."
- EDMO (2025). "Propaganda and Disinformation: Lessons from 2024/25 Elections in Europe."
- Cazzamatta, R. & Sarısakaloğlu, A. (2025). "AI-Generated Misinformation: A Case Study on Emerging Trends in Fact-Checking Practices." SAGE Journals.
- MIT Sloan (2020). "MIT Sloan research about social media, misinformation, and elections."
This article is part of a series published by OpenTruth, the Swiss AI-powered automated verification platform. Our mission: making information verification accessible to everyone, in seconds.