Yearly Fact Check Intelligence Report

2026-01-01 — 2026-04-21
111-day period

A data-driven overview of fact-checked claims from debunking organizations worldwide during this reporting period. This sample of 2189 claims shows the broad contours of the landscape: the top claim type is fabricated text claim, the top method full AI generation, the top subject political figures, and the top intent political manipulation; average severity is 3.1/5, with 683 AI-involved and 1506 non-AI misinformation claims.

What the data tells us

About 1 in 3 claims (31%) involves AI-generated or AI-manipulated content.

76 claims rated severity 5 (critical) — these have potential for serious real-world harm including inciting violence or influencing elections.

39% of claims are rated severity 4 or 5, indicating a high concentration of dangerous misinformation.

The three most common claim types: fabricated text claim (26%), old media new context (14%), out of context media (14%).

Among AI-involved claims, full ai generation is the most common technique at 63% of AI cases (429 claims).

Most targeted regions: south asia (648), north america (333), middle east (297).

The primary motivation behind misinformation is political manipulation (53%), followed by engagement bait (10%).

Misinformation this year overwhelmingly targets outrage (42% of claims) — a deliberate strategy to bypass critical thinking.

Peak month: 2026-01 with 883 claims.
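The headline figures above are straightforward aggregations over the claim records. A minimal sketch of how they can be computed (field names such as `ai_involved`, `severity`, and `claim_type` are illustrative; the real pipeline's schema may differ):

```python
from collections import Counter

def summarize(claims):
    """Compute headline statistics from a list of claim dicts.

    Keys used here ('ai_involved', 'severity', 'claim_type') are
    assumptions for illustration, not the pipeline's actual schema.
    """
    total = len(claims)
    ai = sum(1 for c in claims if c["ai_involved"])
    high = sum(1 for c in claims if c["severity"] >= 4)
    types = Counter(c["claim_type"] for c in claims)
    return {
        "total": total,
        "ai_share_pct": round(100 * ai / total),            # e.g. the 31% figure
        "high_severity_pct": round(100 * high / total),     # e.g. the 39% figure
        "avg_severity": round(sum(c["severity"] for c in claims) / total, 1),
        "top_type": types.most_common(1)[0][0],
    }
```

Applied to the full 2189-claim sample, this kind of pass yields the percentages quoted in the bullets.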

Total Claims Analyzed: 2189 (across 111 days)
Average Severity: 3.1 out of 5.0
AI-Involved: 683 claims using AI tools
Non-AI Misinfo: 1506 claims of traditional misinformation
Top Claim Type: Fabricated Text Claim (most common category)
Top Method: Full AI Generation (most common technique)
Top Subject: Political Figures (most targeted topic)
Top Intent: Political Manipulation (most common motivation)

Severity Distribution

How severe is the misinformation being circulated? Level 1 is low-impact, level 5 is high-impact disinformation with potential for serious real-world harm.

Level 1 (Low Impact): 79 claims
Level 2 (Minor): 335 claims
Level 3 (Moderate): 866 claims
Level 4 (Serious): 768 claims
Level 5 (Critical): 76 claims

Month-by-Month Breakdown

How misinformation evolved over the year, month by month.

2026-01: 883 claims (252 AI / avg severity 2.98)
2026-02: 802 claims (278 AI / avg severity 3.0)
2026-03: 504 claims (153 AI / avg severity 3.5)
2026-04: 0 claims

Statistical Analysis

By Claim Type

What kind of misinformation is it?

Fabricated Text Claim 567
Old Media New Context 309
Out Of Context Media 307
AI Generated Image 245
AI Generated Video 181
Manipulated Image 151
Deepfake Video 77
Misleading Statistic 74
Satire As News 58
Misattributed Quote 41
Fake Screenshot 40
Conspiracy Theory 24
Scam Fraud 3
Miscaptioned 2
Misidentified Image 1

By AI Generation Method

Of the 683 AI-involved claims, which techniques were used? 1506 claims used no AI.

Full AI Generation 429
AI Editing Inpainting 90
Face Swap Deepfake 76
Screenshot Fabrication 42
Composite Collage 25
Text Label Manipulation 14
AI Enhancement 7

By Subject Category

Who or what is being targeted?

Political Figures 866
Military Conflict 357
Celebrity Entertainment 169
Crime Justice 135
Religious Ethnic 105
Protest Social Unrest 66
Business Corporate 55
Law Enforcement 52
Health Science 52
Immigration 44
Disaster Emergency 43
Technology 36
Wildlife Nature 32
Historical Fabrication 14
Sports 8
Scam Fraud 3
Conspiracy Theory 2
Entertainment 1

By Likely Intent

Why were these fakes created?

Political Manipulation 1155
Engagement Bait 223
Outrage Division 184
Disinformation Campaign 157
Fear Mongering 124
Satire Humor 77
Scam Fraud 61
Emotional Manipulation 53
Conspiracy Theory 19
Propaganda 17
Misinformation Campaign 7
Cultural Exploitation 4
Sympathy 1

By Geographic Target

Where are these fakes aimed?

South Asia 648
Global 421
North America 333
Middle East 297
Southeast Asia 172
Europe 95
Oceania 52
Latin America 50
Africa 30
East Asia 13

By Debunking Method

How were these fakes identified?

Source Verification 1268
AI Detection Tools 274
Visual Artifact Analysis 263
Reverse Image Search 131
Data Fact Check 64
Expert Consultation 63
Official Records 32
Contextual Impossibility 19
Multiple Methods 7

By Platform

Where were these fakes distributed? 1671 claims spread across multiple or unidentified platforms.

Facebook 251
X Twitter 212
Tiktok 21
Instagram 19
Youtube 12
Whatsapp 2
Telegram 1

How advanced is the deception?

Sophistication of misinformation ranges from crude fabrication to highly polished, AI-enhanced content designed to evade detection.

Low 1254
Medium 669
High 201

Which emotions are exploited?

Misinformation is designed to trigger specific emotional responses. Understanding the emotional vector reveals the strategy behind the deception.

Outrage 930
Fear 494
Humor 116
Sympathy 103
Hope 89
Disgust 85
Admiration 80
Grief 43
Patriotism 34
Greed 1
Engagement Bait 1
Surprise 1

Where This Data Comes From

This report aggregates fact-checked claims from 41 independent fact-checking organizations worldwide via the Google Fact Check Tools API. These organizations are signatories of the International Fact-Checking Network (IFCN) code of principles and follow transparent verification methodologies. Claims cover all types of misinformation — not just AI-generated images, but also false text claims, conspiracy theories, misleading statistics, out-of-context media, and more.

Claims reviewed per organization:

Snopes 273
Lead Stories 240
AFP 176
The Quint 167
Press Trust of India 146
NewsMobile 136
Newschecker 126
FACTLY 111
Rappler 107
AFP Fact Check 95
Full Fact 90
AAP 53
Unknown 51
BOOM Fact Check 48
VERA Files 48
DigitEye India 46
India Today 44
Rumor Scanner 37
Alt News 30
StopFake 28
Vishvas News 17
PolitiFact 15
DW.com 15
FactCheckHub 14
Lighthouse Journalism 13
FactCheck.org 11
Dismislab 9
Fact Crescendo Sri Lanka 7
Medical Dialogues 6
TeluguPost 5
Science Feedback 5
Annie Lab 4
Boom Live 4
THIP Media 2
Australian Associated Press 2
AP News 1
dw.com 1
Debunking Misinformation 1
YouTurn 1
Africa Check 1

How claims are collected: The Google Fact Check API indexes claims from fact-checking organizations that publish ClaimReview structured data. We query the API with broad search terms to capture all available fact-checks from the reporting period. Each claim is then categorized using Gemini AI by type, method, subject, intent, geographic target, severity, sophistication, and emotional vector.

All 2189 Analyzed Claims

Every fact-checked claim from this period, ranked by severity.



About This Report

Data Source

Claims are sourced from the Google Fact Check Tools API, which indexes fact-check articles from IFCN-certified organizations worldwide. The API is query-based — there is no way to retrieve a complete list of all fact-checked claims. To maximize coverage, we run 65+ different search queries (broad terms like "fact check", "viral", "fake"; topic-specific terms like "election", "health", "deepfake"; regional terms like "India", "Africa", "Brazil"; and platform names like "Facebook", "TikTok", "WhatsApp"), each returning up to 100 results with pagination. This yields a large sample but is not a complete census of all fact-checked content published in the period.
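The query-and-paginate loop described above can be sketched as follows. The endpoint and parameter names (`query`, `languageCode`, `pageSize`, `pageToken`) follow the public Fact Check Tools API; the injectable `fetch` argument is an assumption added here so the pagination logic can be exercised without a live API key:

```python
import json
import urllib.parse
import urllib.request

API_URL = "https://factchecktools.googleapis.com/v1alpha1/claims:search"

def search_claims(query, api_key, language="en", page_size=100, fetch=None):
    """Collect all result pages for one search term.

    `fetch` defaults to a live HTTP call against the Fact Check Tools
    API; passing a stub lets the paging logic run offline.
    """
    if fetch is None:
        def fetch(params):
            url = API_URL + "?" + urllib.parse.urlencode(params)
            with urllib.request.urlopen(url, timeout=30) as resp:
                return json.load(resp)

    params = {"query": query, "key": api_key,
              "languageCode": language, "pageSize": page_size}
    claims, token = [], None
    while True:
        if token:
            params["pageToken"] = token
        page = fetch(params)
        claims.extend(page.get("claims", []))
        token = page.get("nextPageToken")
        if not token:          # last page reached
            break
    return claims
```

Running 65+ such queries and concatenating the results, before deduplication, produces the raw sample described above.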

Multi-Reviewer Claims

When the same claim was reviewed by multiple fact-checking organizations, all reviewers are shown on that claim's card. Claims are merged by matching claim text, so a story checked by, for example, Snopes, PolitiFact, and AFP Fact Check appears once with all three linked. The number of claims in this report therefore represents unique stories, not unique articles.
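The merge step can be sketched as grouping on a normalized form of the claim text. This is a simplified illustration; the production pipeline may use fuzzier matching than exact normalized equality:

```python
import re

def normalize(text):
    """Canonical form for matching the same claim across reviewers:
    lowercase, punctuation stripped, whitespace collapsed."""
    return re.sub(r"\s+", " ", re.sub(r"[^\w\s]", "", text.lower())).strip()

def merge_reviews(reviews):
    """Group (claim_text, reviewer) pairs into unique stories,
    attaching every reviewer that checked the same claim."""
    stories = {}
    for text, reviewer in reviews:
        key = normalize(text)
        story = stories.setdefault(key, {"text": text, "reviewers": []})
        if reviewer not in story["reviewers"]:
            story["reviewers"].append(reviewer)
    return list(stories.values())
```

Two articles whose claim texts differ only in casing or punctuation collapse into one story with both reviewers listed.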

Classification

Each claim is categorized by type, generation method, subject, intent, geographic target, severity (1–5), sophistication, and emotional vector using Gemini 2.0 Flash AI classification. Severity ratings reflect potential real-world impact (1 = quickly debunked satire, 5 = could incite violence or influence elections). This report covers all types of misinformation — AI-generated images, deepfakes, false text claims, conspiracy theories, misleading statistics, out-of-context media, fabricated quotes, fake screenshots, and more.
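A structured-output classification of this kind typically builds a prompt listing the required fields and then validates the model's JSON reply. The exact prompt and schema used with Gemini 2.0 Flash are not published; the field names below mirror the categories listed above and the helper functions are hypothetical:

```python
import json

# Category keys mirroring this report's classification axes (assumed names).
CATEGORIES = ["claim_type", "method", "subject", "intent",
              "geographic_target", "severity", "sophistication", "emotion"]

def build_prompt(claim_text):
    """Assemble a classification prompt requesting one JSON object."""
    return (
        "Classify this fact-checked claim. Respond with a JSON object "
        f"containing the keys {CATEGORIES}. Severity is an integer 1-5.\n\n"
        f"Claim: {claim_text}"
    )

def parse_classification(raw):
    """Validate the model's JSON reply, rejecting incomplete output."""
    data = json.loads(raw)
    missing = [k for k in CATEGORIES if k not in data]
    if missing:
        raise ValueError(f"missing keys: {missing}")
    if not 1 <= int(data["severity"]) <= 5:
        raise ValueError("severity out of range")
    return data
```

Validating every reply against the fixed key set keeps occasional malformed model output from silently corrupting the aggregate counts.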

Source Reports

This yearly report was aggregated from 6 source reports: weekly_report_2026-01-25.json, weekly_report_2026-02-09.json, weekly_report_2026-02-15.json, weekly_report_2026-02-18.json, weekly_report_2026-03-01.json, weekly_report_2026-03-15.json. Claims appearing in multiple source reports are deduplicated so each unique story is counted once.

Limitations

Because the Google Fact Check API requires search terms, claims that do not match any of our query terms will not appear. English-language results are prioritized (languageCode=en). The sample skews toward claims that use common misinformation-related vocabulary. Regional coverage depends on whether local fact-checkers publish in English and are indexed by Google. AI classification may occasionally miscategorize edge cases.

Data from 41 fact-checking organizations
Report generated in 2026, covering a 111-day period