A data-driven overview of worldwide fact-checked claims analyzed by debunking organizations during this reporting period. This sample of 504 claims offers a snapshot of the landscape: the top claim type is old media new context, the top method full ai generation, the top subject military conflict, and the top intent political manipulation; average severity is 3.5/5, with 153 AI-involved and 351 non-AI misinformation claims.
About 1 in 3 claims (30%) involves AI-generated or AI-manipulated content.
Claim volume is down 37% compared to the previous period (802 → 504).
AI-involved claims dropped 45% compared to the previous period.
Average severity increased from 3.0 to 3.5 — the misinformation is getting more dangerous.
18 claims rated severity 5 (critical) — these have potential for serious real-world harm including inciting violence or influencing elections.
59% of claims are rated severity 4 or 5, indicating a high concentration of dangerous misinformation.
The three most common claim types: old media new context (26%), fabricated text claim (21%), out of context media (13%).
Among AI-involved claims, full ai generation is the most common technique at 69% of AI cases (106 claims).
Most targeted regions: middle east (242), south asia (78), north america (40).
The primary motivation behind misinformation is political manipulation (66%), followed by disinformation campaign (10%).
Misinformation this month overwhelmingly targets fear (45% of claims) — a deliberate strategy to bypass critical thinking.
How severe is the misinformation being circulated? Level 1 is low-impact, level 5 is high-impact disinformation with potential for serious real-world harm.
What kind of misinformation is it? Click to filter claims.
Of the 153 AI-involved claims, which techniques were used? Click to filter. 351 claims used no AI.
Who or what is being targeted? Click to filter claims.
Why were these fakes created? Click to filter claims.
Where are these fakes aimed? Click to filter claims.
How were these fakes identified?
Where were these fakes distributed? 375 claims spread across multiple or unidentified platforms.
Sophistication of misinformation ranges from crude fabrication to highly polished, AI-enhanced content designed to evade detection.
Misinformation is designed to trigger specific emotional responses. Understanding the emotional vector reveals the strategy behind the deception.
This report aggregates fact-checked claims from 35 independent fact-checking organizations worldwide via the Google Fact Check Tools API. These organizations are signatories of the International Fact-Checking Network (IFCN) code of principles and follow transparent verification methodologies. Claims cover all types of misinformation — not just AI-generated images, but also false text claims, conspiracy theories, misleading statistics, out-of-context media, and more.
How claims are collected: The Google Fact Check API indexes claims from fact-checking organizations that publish ClaimReview structured data. We query the API with broad search terms to capture all available fact-checks from the reporting period. Each claim is then categorized using Gemini AI by type, method, subject, intent, geographic target, severity, sophistication, and emotional vector.
Every fact-checked claim from this period, ranked by severity. Click any tag to filter by category.
Claims are sourced from the Google Fact Check Tools API, which indexes fact-check articles from IFCN-certified organizations worldwide. The API is query-based — there is no way to retrieve a complete list of all fact-checked claims. To maximize coverage, we run 65+ different search queries (broad terms like "fact check", "viral", "fake"; topic-specific terms like "election", "health", "deepfake"; regional terms like "India", "Africa", "Brazil"; and platform names like "Facebook", "TikTok", "WhatsApp"), each returning up to 100 results with pagination. This yields a large sample but is not a complete census of all fact-checked content published in the period.
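The multi-query collection loop described above can be sketched as follows. The endpoint and parameter names (`query`, `languageCode`, `pageSize`, `pageToken`, `nextPageToken`) come from the public Fact Check Tools API; `fetch_page` is a hypothetical injection point standing in for the actual HTTP call, not part of the real pipeline:

```python
API_URL = "https://factchecktools.googleapis.com/v1alpha1/claims:search"

def collect_claims(queries, fetch_page, page_size=100):
    """Run each search query against the Fact Check Tools API and
    follow nextPageToken until the query is exhausted.

    fetch_page(params) performs the HTTP GET against API_URL and
    returns the parsed JSON dict; injecting it keeps this sketch
    testable without network access.
    """
    seen = {}
    for query in queries:
        params = {"query": query, "languageCode": "en", "pageSize": page_size}
        while True:
            data = fetch_page(dict(params))
            for claim in data.get("claims", []):
                # Key on claim text so the same story surfaced by two
                # different queries is only collected once.
                seen.setdefault(claim["text"], claim)
            token = data.get("nextPageToken")
            if not token:
                break
            params["pageToken"] = token
    return list(seen.values())
```

Because the API only answers search queries, coverage depends entirely on how broad the query list is, which is why the report runs 65+ of them.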
When the same claim was reviewed by multiple fact-checking organizations, all reviewers are shown on that claim's card. Claims are merged by matching claim text, so a story checked by, for example, Snopes, PolitiFact, and AFP Fact Check appears once with all three linked. The claim count in this report therefore represents unique stories, not unique articles.
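A minimal sketch of that text-based merge, assuming each fact-check article arrives as a dict with hypothetical `claim_text` and `reviewer` fields (the real pipeline's field names and matching rules may differ):

```python
import re

def normalize(text):
    # Lowercase and strip punctuation so near-identical claim texts
    # from different fact-checkers map to the same key.
    return re.sub(r"[^a-z0-9]+", " ", text.lower()).strip()

def merge_reviews(reviews):
    """Collapse fact-check articles about the same underlying claim
    into one entry that lists every reviewing organization."""
    merged = {}
    for review in reviews:
        key = normalize(review["claim_text"])
        entry = merged.setdefault(
            key, {"claim_text": review["claim_text"], "reviewers": []})
        if review["reviewer"] not in entry["reviewers"]:
            entry["reviewers"].append(review["reviewer"])
    return list(merged.values())
```

Normalizing before matching means trivial differences in casing or punctuation do not split one story into two entries.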
Each claim is categorized by type, generation method, subject, intent, geographic target, severity (1–5), sophistication, and emotional vector using Gemini 2.0 Flash AI classification. Severity ratings reflect potential real-world impact (1 = quickly debunked satire, 5 = could incite violence or influence elections). This report covers all types of misinformation — AI-generated images, deepfakes, false text claims, conspiracy theories, misleading statistics, out-of-context media, fabricated quotes, fake screenshots, and more.
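One way to keep AI-assigned labels consistent is to validate each classifier response against a fixed taxonomy before accepting it. The sketch below assumes the classifier returns JSON; the category lists shown are abbreviated examples drawn from this report, not the full taxonomy:

```python
import json

# Abbreviated example taxonomy -- the real category lists are longer.
TAXONOMY = {
    "claim_type": {"old media new context", "fabricated text claim",
                   "out of context media"},
    "intent": {"political manipulation", "disinformation campaign"},
}

def validate_classification(raw_json):
    """Parse a classifier response, rejecting out-of-vocabulary labels
    and severities outside the 1-5 scale."""
    record = json.loads(raw_json)
    for field, allowed in TAXONOMY.items():
        if record.get(field) not in allowed:
            raise ValueError(f"unknown {field}: {record.get(field)!r}")
    severity = record.get("severity")
    if not isinstance(severity, int) or not 1 <= severity <= 5:
        raise ValueError(f"severity must be an integer 1-5, got {severity!r}")
    return record
```

Rejected responses can simply be retried, which limits the "occasional miscategorization" risk noted in the limitations section to labels that are at least in-vocabulary.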
This monthly report was aggregated from 2 source reports: weekly_report_2026-03-01.json, weekly_report_2026-03-15.json. Claims appearing in multiple source reports are deduplicated so each unique story is counted once.
Because the Google Fact Check API requires search terms, claims that do not match any of our query terms will not appear. English-language results are prioritized (languageCode=en). The sample skews toward claims that use common misinformation-related vocabulary. Regional coverage depends on whether local fact-checkers publish in English and are indexed by Google. AI classification may occasionally miscategorize edge cases.