
We advance AI safety through collaboration, research, and education.

What we do

Since 2024 · 275+ subscribers

Guaranteed Safe AI Seminars Monthly technical seminars on quantitative and guaranteed safety approaches. Featuring leading researchers including Yoshua Bengio, Steve Omohundro, Tan Zhi Xuan, and Jobst Heitzig.

Since 2025 · 1600+ members

AI Safety in Montréal Local field‑building hub serving the Montréal AI safety, ethics & governance community. Meetups, coworking sessions, targeted workshops, advising, and collaborations.

Since 2025

Canadian AI Safety Coordination Coordination group connecting Canadian organizations and networks working toward AI safety.

Since 2018 · 400+ researchers

AI Safety Unconference Participant-driven events featuring talks, sessions, moderated discussions, and 1:1s. Organized AI Safety Unconference @ NeurIPS (2018–2022), Virtual AI Safety Unconference (2024), and hybrid edition planned for 2026.

Since 2024

Horizon Events Curating multiple global AI safety event series. Supporting the broader ecosystem of AI safety events and initiatives.

Track record

We’ve engaged thousands of participants across events and channels since 2018, built a Montréal community that now counts 1600+ members, and facilitated cross-organizational collaboration globally.

2018–2022

AI Safety Unconference @ NeurIPS Years of participant-driven convenings featuring lightning talks, moderated discussions, and 1:1 sessions. 60+ participants per event from leading organizations including Anthropic, DeepMind, OpenAI, Mila, MIRI, MIT, Stanford, Oxford, Cambridge, and more.

2024–present

Guaranteed Safe AI Seminars Monthly technical talks with 15–30 live attendees per session, and 600+ total registrations annually. Featured speakers include Yoshua Bengio, Steve Omohundro, Tan Zhi Xuan, and Jobst Heitzig.

2025–present

Montréal AI safety community Built local ecosystem through meetups, coworking sessions, targeted workshops, and co-running the Mila AI safety reading group (biweekly sessions with 10–20 researchers). Serving members across AI safety, ethics, and governance.

2022, 2025

Limits to Control workshop Co-organized workshop focused on the difficulties, and potential impossibility, of controlling advanced AI systems. The 2025 edition brought together researchers to map the territory of AI control limitations and produced a collective statement.

2023–present

AI Safety Events & Training newsletter Founded the Substack newsletter in 2023, contributing event curation and community growth support.

What participants say

Helpful for keeping up with the cutting edge and for launching collaborations. Haydn Belfield, Google DeepMind

Very useful to meet and talk with AI safety researchers at NeurIPS. Esben Kran, Apart Research

A great way to meet the best people in the area and propel daring ideas forward. Stuart Armstrong, Aligned AI

Small discussion groups exposed me to new perspectives. Adam Gleave, FAR.AI

Get involved

Volunteer We welcome volunteers for seminar operations, research briefs, speaker outreach, and video editing. Training and templates provided.

Partner We collaborate with universities, labs, NGOs, and standards bodies to co‑host sessions, share speakers, and build pilot programs.

Support Your support enables us to expand collaboration, events, and research. Sponsorships, grants, and in‑kind contributions (venue hosting, captioning, editing, design) welcome.

Advisors Seeking senior advisors across verification, evaluations, and governance. Conflict of interest policy applies.

Follow Stay updated on AI safety events, training opportunities, and our latest initiatives. Subscribe to our newsletter and follow us on social media.

Acknowledgements

Contributors

Orpheus Lummis · Founder
Étienne Langlois · AI safety coordination & strategy
Linda Linsefors · Advisor, events & AI safety
Arjun Yadav · Generalist support & events
Manu García · Communications specialist & event coordinator
Pascal Huynh · Event & interaction design
Nicolas Grenier · Advisor, worlding
Richard Mallah · Advisor, AI Safety Unconference series
Mario Gibney · Advisor, AI safety field-building
Diego Jiménez · AI strategy & events operations
Vaughn DiMarco · Advisor, AI Safety Unconference series
David Krueger · Co-organizer, AI Safety Unconference series

Funders & sponsors

We’re funding‑constrained. Donations are appreciated and go toward high‑impact projects. Donate.

Contact

We’d love to hear from you. Reach out to discuss collaborations, ask questions, or explore opportunities to work together on AI safety initiatives.

Email us · Schedule a call