Canadian AI Incident Monitor
The Canadian AI Incident Monitor (CAIM) is a bilingual, public-interest database that documents AI incidents and hazards affecting Canada. It structures scattered information about AI failures, near-misses, and emerging risks into a searchable, citable evidence base with API access.
CAIM is currently in pilot. Records and analytical assessments are provisional pending peer review.
Visit CAIM ↗
What CAIM does
- Documents AI incidents and hazards with a Canada nexus, using structured records with transparent sourcing and verification levels
- Publishes a searchable database with structured API access, aligned with international frameworks (OECD, AIID) for cross-border comparability
- Tracks governance responses to each incident: whether a regulator investigated, what they found, and what changed
Why a monitor
Incident monitors exist in aviation, medical devices, and chemical safety because an individual failure is an anecdote; aggregated and structured, failures become evidence. CAIM applies this model to AI in Canada.
Canada deploys AI systems across healthcare, public services, law enforcement, and critical infrastructure. When these systems fail, information about the failures is fragmented across news coverage, regulatory filings, court records, and institutional memory. CAIM consolidates that information into a structured, well-sourced, publicly accessible evidence base.
It also adds national depth that cross-country monitors cannot provide: francophone sources, provincial and municipal events, government automated decision systems, jurisdictional mapping, and tracking of whether governance responses actually worked.