HΩ
We advance AI safety through collaboration, research, and education.
What we do
Guaranteed Safe AI Seminars
Monthly technical seminars on quantitative and guaranteed safety approaches. Featuring leading researchers including Yoshua Bengio, Steve Omohundro, Tan Zhi Xuan, and Jobst Heitzig.
AI Safety in Montréal
Local field‑building hub serving the Montréal AI safety, ethics & governance community. Meetups, coworking sessions, targeted workshops, advising, and collaborations.
Coordination towards International AI Safety Treaty
Monthly coordination calls bringing together Canadian actors across civil society, research institutions, standards bodies, and industry to advance coordinated AI safety policy and initiatives.
Contact us →
AI Safety Unconference
Participant-driven events featuring talks, sessions, moderated discussions, and 1:1s. Organized AI Safety Unconference @ NeurIPS (2018–2022) and Virtual AI Safety Unconference (2024).
Horizon Events
Curating multiple global AI safety event series. Supporting the broader ecosystem of AI safety events and initiatives.
Visit site →
Propose a Collaboration
Have an idea for a collaborative project? We’re interested in joint initiatives, research partnerships, event co-hosting, and cross-organizational coordination in AI safety.
Propose project →
Track record
We’ve engaged thousands of participants across events and channels since 2018, established a community of 1,600+ members in Montréal, and facilitated cross-organizational collaboration globally.
AI Safety Unconference @ NeurIPS (2018–2022). Five years of participant-driven convenings featuring lightning talks, moderated discussions, and 1:1 sessions. 60+ participants per event from leading organizations including Anthropic, DeepMind, OpenAI, Mila, MIRI, MIT, Stanford, Oxford, Cambridge, and more.
Guaranteed Safe AI Seminars (2024–present). Monthly technical talks with 15–30 live attendees per session, and 600+ total registrations annually. Featured speakers include Yoshua Bengio, Steve Omohundro, Tan Zhi Xuan, and Jobst Heitzig.
Montréal AI safety community (2025–present). Built a local ecosystem through meetups, coworking sessions, targeted workshops, and co-running the Mila AI safety reading group (biweekly sessions with 10–20 researchers). Serves members across AI safety, ethics, and governance.
Limits to Control workshop. Co-organized a workshop focused on the difficulties and potential impossibility of controlling advanced AI systems.
AI Safety Events & Training newsletter. Founded the Substack newsletter in 2023, contributing event curation and community growth support.
What participants say
“Helpful for keeping up with the cutting edge and for launching collaborations.” — Haydn Belfield
“Very useful to meet and talk with AI safety researchers at NeurIPS.” — Esben Kran
“A great way to meet the best people in the area and propel daring ideas forward.” — Stuart Armstrong
“Small discussion groups exposed me to new perspectives.” — Adam Gleave
Get involved
Volunteer
We welcome volunteers for seminar operations, research briefs, speaker outreach, and video editing. Training and templates provided.
Email us →
Partner
We collaborate with universities, labs, NGOs, and standards bodies to co‑host sessions, share speakers, and build pilots.
Email us →
Support
Your support enables us to expand collaboration, events, and research. Sponsorships, grants, and in‑kind contributions (venue hosting, captioning, editing, design) welcome.
Discuss →
Advisors
Seeking senior advisors across verification, evaluations, and governance. Conflict of interest policy applies.
Contact us →
Follow
Stay updated on AI safety events, training opportunities, and our latest initiatives. Subscribe to our newsletter and follow us on social media.
Subscribe →
Acknowledgements
Contributors
- Orpheus Lummis — Founder
- Étienne Langlois — AI safety coordination & strategy
- Linda Linsefors — Advisor, events & AI safety
- Arjun Yadav — Generalist support & events
- Pascal Huynh — Event & interaction design
- Nicolas Grenier — Advisor, worlding
- Richard Mallah — Advisor, AI Safety Unconference series
- Diego Jiménez — AI strategy & events operations
- Vaughn DiMarco — Advisor, AI Safety Unconference series
- David Krueger — Co-organizer, AI Safety Unconference series
Funders & sponsors
Contact
We’d love to hear from you. Reach out to discuss collaborations, ask questions, or explore opportunities to work together on AI safety initiatives.