Canadian AI Incident Monitor

Since 2026

The Canadian AI Incident Monitor (CAIM) is a bilingual, public-interest database that documents AI incidents and hazards affecting Canada. It structures scattered information about AI failures, near-misses, and emerging risks into a searchable, citable evidence base with open API access.

Visit CAIM ↗
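The intro mentions open API access. As a rough illustration of how such an API might be queried, here is a minimal Python sketch. The base URL, endpoint path, query parameters, and record fields below are all assumptions for illustration; consult CAIM's actual API documentation for the real interface.

```python
# Hypothetical sketch of querying an incident database API like CAIM's.
# The base URL, path, parameters, and record fields are assumptions,
# not the real CAIM interface.
import json
from urllib.parse import urlencode

BASE = "https://example.org/api/v1"  # placeholder, not the real endpoint


def build_query(sector=None, province=None, lang="en"):
    """Assemble a search URL from hypothetical filter parameters."""
    params = {k: v for k, v in {
        "sector": sector, "province": province, "lang": lang}.items() if v}
    return f"{BASE}/incidents?{urlencode(params)}"


# An incident record might look like this (fields are illustrative only):
sample = json.loads("""{
  "id": "CAIM-0001",
  "title": "Example automated decision incident",
  "jurisdiction": "federal",
  "status": "provisional"
}""")

print(build_query(sector="healthcare", province="QC"))
print(sample["id"], sample["status"])
```

The sketch only assembles a query string and parses a sample JSON record; it makes no network calls, since the real endpoint and schema are not documented here.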

Why a monitor

Incident monitors exist in aviation, medical devices, and chemical safety because an individual failure is only an anecdote; aggregated and structured, failures become evidence. CAIM applies this model to AI in Canada.

Canada deploys AI systems across healthcare, public services, law enforcement, and critical infrastructure. When these systems fail, information about the failures is fragmented across news coverage, regulatory filings, court records, and institutional memory. CAIM consolidates that information into a structured, well-sourced, publicly accessible evidence base.

It also adds national depth that international monitors cannot provide: coverage of francophone sources, provincial and municipal events, government automated decision systems, jurisdictional mapping, and tracking of whether governance responses actually worked.


Current status

CAIM is in its founding phase. The editorial framework, record schema, and initial records are established. Records and analytical assessments have not yet been peer-reviewed and should be treated as provisional.