A data-driven overview of fact-checked claims worldwide, as analyzed by debunking organizations during this reporting period. This sample of 386 claims offers a snapshot of the landscape: the top claim type is fabricated text claims, the top method is no AI involved, the top subject is political figures, and the top intent is political manipulation. Average severity is 3.1/5, with 116 AI-involved and 270 non-AI misinformation claims.
About 1 in 3 claims (30%) involves AI-generated or AI-manipulated content.
Claim volume is down 16% compared to the previous period (460 → 386).
20 claims rated severity 5 (critical) — these have potential for serious real-world harm including inciting violence or influencing elections.
34% of claims are rated severity 4 or 5, indicating a high concentration of dangerous misinformation.
Fabricated text claims are the dominant form of misinformation at 31% (118 claims).
The three most common claim types: fabricated text claims (31%), AI-generated images (16%), out-of-context media (12%).
Among AI-involved claims, full AI generation is the most common technique at 72% of AI cases (83 claims).
Most targeted regions: South Asia (142), North America (63), Southeast Asia (41).
The primary motivation behind misinformation is political manipulation (50%), followed by engagement bait (13%).
Misinformation this bi-weekly period overwhelmingly aims to provoke outrage (45% of claims), a deliberate strategy to bypass critical thinking.
How severe is the misinformation being circulated? Level 1 is low-impact, level 5 is high-impact disinformation with potential for serious real-world harm.
What kind of misinformation is it? Click to filter claims.
Of the 116 AI-involved claims, which techniques were used? Click to filter. 270 claims used no AI.
Who or what is being targeted? Click to filter claims.
Why were these fakes created? Click to filter claims.
Where are these fakes aimed? Click to filter claims.
How were these fakes identified?
Where were these fakes distributed? 286 claims spread across multiple or unidentified platforms.
Sophistication of misinformation ranges from crude fabrication to highly polished, AI-enhanced content designed to evade detection.
Misinformation is designed to trigger specific emotional responses. Understanding the emotional vector reveals the strategy behind the deception.
This report aggregates fact-checked claims from 35 independent fact-checking organizations worldwide via the Google Fact Check Tools API. These organizations are signatories of the International Fact-Checking Network (IFCN) code of principles and follow transparent verification methodologies. Claims cover all types of misinformation — not just AI-generated images, but also false text claims, conspiracy theories, misleading statistics, out-of-context media, and more.
How claims are collected: The Google Fact Check API indexes claims from fact-checking organizations that publish ClaimReview structured data. We query the API with broad search terms to capture all available fact-checks from the reporting period. Each claim is then categorized using Gemini AI by type, method, subject, intent, geographic target, severity, sophistication, and emotional vector.
Every fact-checked claim from this period, ranked by severity. Click any tag to filter by category.
Claims are sourced from the Google Fact Check Tools API, which indexes fact-check articles from IFCN-certified organizations worldwide. The API is query-based — there is no way to retrieve a complete list of all fact-checked claims. To maximize coverage, we run 75 different search queries (broad terms like "fact check", "viral", "fake"; topic-specific terms like "election", "health", "deepfake"; regional terms like "India", "Africa", "Brazil"; and platform names like "Facebook", "TikTok", "WhatsApp"), each returning up to 100 results with pagination. This yields a large sample but is not a complete census of all fact-checked content published in the period.
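The query-and-paginate loop described above can be sketched as follows. The endpoint URL and the `query`, `languageCode`, `pageSize`, `pageToken`, and `nextPageToken` fields match the real Fact Check Tools API; the `search_claims` function and the injectable `fetch` callable are illustrative, not the report's actual code.

```python
from typing import Callable, Iterator

# Real endpoint of the Google Fact Check Tools API (v1alpha1).
API_URL = "https://factchecktools.googleapis.com/v1alpha1/claims:search"

def search_claims(query: str, fetch: Callable[[dict], dict],
                  page_size: int = 100) -> Iterator[dict]:
    """Yield every claim for one search term, following nextPageToken pagination.

    `fetch` performs one HTTP GET with the given params and returns the parsed
    JSON body; injecting it keeps the pagination logic testable offline.
    """
    params = {"query": query, "languageCode": "en", "pageSize": page_size}
    while True:
        page = fetch(dict(params))
        yield from page.get("claims", [])
        token = page.get("nextPageToken")
        if not token:
            break
        params["pageToken"] = token
```

In production, `fetch` would be something like `lambda p: requests.get(API_URL, params={**p, "key": API_KEY}).json()`, and each of the 75 search terms would be passed through `search_claims` in turn, with results pooled before deduplication.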
When the same claim was reviewed by multiple fact-checking organizations, all reviewers are shown on that claim's card. Claims are merged by matching claim text, so a story checked by, say, Snopes, PolitiFact, and AFP Fact Check appears once with all three linked. The number of claims in this report therefore represents unique stories, not unique articles.
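The text-matching merge can be sketched like this. The normalization rule (lowercasing and collapsing punctuation) is an assumption for illustration; the `text` / `claimReview` / `publisher.name` fields follow the API's ClaimReview response shape.

```python
import re
from collections import OrderedDict

def normalize(text: str) -> str:
    # Hypothetical matching rule: lowercase, collapse punctuation and whitespace.
    return re.sub(r"[^a-z0-9]+", " ", text.lower()).strip()

def merge_claims(claims: list[dict]) -> list[dict]:
    """Collapse near-identical claim texts into one record listing all reviewers."""
    merged: "OrderedDict[str, dict]" = OrderedDict()
    for claim in claims:
        key = normalize(claim["text"])
        entry = merged.setdefault(key, {"text": claim["text"], "reviewers": []})
        for review in claim.get("claimReview", []):
            name = review.get("publisher", {}).get("name")
            if name and name not in entry["reviewers"]:
                entry["reviewers"].append(name)
    return list(merged.values())
```

Exact-match-after-normalization keeps the merge conservative: two genuinely different claims are never collapsed, at the cost of occasionally missing paraphrased duplicates.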
Each claim is categorized by type, generation method, subject, intent, geographic target, severity (1–5), sophistication, and emotional vector using Gemini 2.0 Flash AI classification. Severity ratings reflect potential real-world impact (1 = quickly debunked satire, 5 = could incite violence or influence elections). This report covers all types of misinformation — AI-generated images, deepfakes, false text claims, conspiracy theories, misleading statistics, out-of-context media, fabricated quotes, fake screenshots, and more.
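A minimal validator for classification records of that shape might look like the sketch below; the field names are assumptions for illustration (the report's internal names may differ), but the severity constraint matches the 1–5 scale described above.

```python
# Assumed field names for the classification schema; the report's internal
# names may differ.
REQUIRED_FIELDS = {"type", "method", "subject", "intent",
                   "geographic_target", "severity",
                   "sophistication", "emotional_vector"}

def validate_classification(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record passes."""
    problems = [f"missing field: {f}"
                for f in sorted(REQUIRED_FIELDS - record.keys())]
    severity = record.get("severity")
    if not (isinstance(severity, int) and 1 <= severity <= 5):
        problems.append("severity must be an integer from 1 to 5")
    return problems
```

Validating every AI-produced record before aggregation is a cheap guard against the edge-case misclassifications noted in the limitations below.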
Because the Google Fact Check API requires search terms, claims that do not match any of our query terms will not appear. English-language results are prioritized (languageCode=en). The sample skews toward claims that use common misinformation-related vocabulary. Regional coverage depends on whether local fact-checkers publish in English and are indexed by Google. AI classification may occasionally miscategorize edge cases.