What we do
Guaranteed Safe AI Seminars: Monthly technical seminars on quantitative and guaranteed approaches to AI safety, featuring leading researchers including Yoshua Bengio, Steve Omohundro, Tan Zhi Xuan, and Jobst Heitzig.
AI Safety in Montréal: A local field‑building hub serving the Montréal AI safety, ethics, and governance community through meetups, coworking sessions, targeted workshops, advising, and collaborations.
Canadian AI Safety Coordination: A coordination group connecting Canadian organizations and networks working toward AI safety.
AI Safety Unconference: Participant-driven events featuring talks, sessions, moderated discussions, and 1:1s. We organized the AI Safety Unconference @ NeurIPS (2018–2022) and the Virtual AI Safety Unconference (2024), with a hybrid edition planned for 2026.
Horizon Events: Curating multiple global AI safety event series and supporting the broader ecosystem of AI safety events and initiatives.
Track record
Since 2018 we’ve engaged thousands of participants across events and channels, built a community of 1,600+ members in Montréal, and facilitated cross-organizational collaboration globally.
AI Safety Unconference @ NeurIPS: Participant-driven convenings held annually from 2018 to 2022, featuring lightning talks, moderated discussions, and 1:1 sessions. Each event drew 60+ participants from leading organizations including Anthropic, DeepMind, OpenAI, Mila, MIRI, MIT, Stanford, Oxford, Cambridge, and more.
Guaranteed Safe AI Seminars: Monthly technical talks with 15–30 live attendees per session and 600+ total registrations annually. Featured speakers include Yoshua Bengio, Steve Omohundro, Tan Zhi Xuan, and Jobst Heitzig.
Montréal AI safety community: Built a local ecosystem through meetups, coworking sessions, targeted workshops, and co-running the Mila AI safety reading group (biweekly sessions with 10–20 researchers), serving members across AI safety, ethics, and governance.
Limits to Control workshop: Co-organized a workshop focused on the difficulties or impossibilities of controlling advanced AI systems. The 2025 edition brought together researchers to map the territory of AI control limitations and produced a collective statement.
AI Safety Events & Training newsletter: Founded on Substack in 2023; we contribute event curation and support community growth.
What participants say
“Helpful for keeping up with the cutting edge and for launching collaborations.”
Haydn Belfield, Google DeepMind
“Very useful to meet and talk with AI safety researchers at NeurIPS.”
Esben Kran, Apart Research
“A great way to meet the best people in the area and propel daring ideas forward.”
Stuart Armstrong, Aligned AI
“Small discussion groups exposed me to new perspectives.”
Adam Gleave, FAR.AI
Get involved
Volunteer: We welcome volunteers for seminar operations, research briefs, speaker outreach, and video editing. Training and templates are provided.
Partner: We collaborate with universities, labs, NGOs, and standards bodies to co‑host sessions, share speakers, and build pilots.
Support: Your support enables us to expand collaboration, events, and research. Sponsorships, grants, and in‑kind contributions (venue hosting, captioning, editing, design) are welcome.
Advisors: We are seeking senior advisors across verification, evaluations, and governance. A conflict‑of‑interest policy applies.
Follow: Stay updated on AI safety events, training opportunities, and our latest initiatives. Subscribe to our newsletter and follow us on social media.
Acknowledgements
Contributors
| Name | Role |
| --- | --- |
| Orpheus Lummis | Founder |
| Étienne Langlois | AI safety coordination & strategy |
| Linda Linsefors | Advisor, events & AI safety |
| Arjun Yadav | Generalist support & events |
| Manu García | Communications specialist & event coordinator |
| Pascal Huynh | Event & interaction design |
| Nicolas Grenier | Advisor, worlding |
| Richard Mallah | Advisor, AI Safety Unconference series |
| Mario Gibney | Advisor, AI safety field-building |
| Diego Jiménez | AI strategy & events operations |
| Vaughn DiMarco | Advisor, AI Safety Unconference series |
| David Krueger | Co-organizer, AI Safety Unconference series |
Funders & sponsors
| Funder |
| --- |
| Long‑Term Future Fund (EA Funds) |
| Survival and Flourishing Fund |
| Effective Altruism Foundation |
| Future of Life Institute |
We’re funding‑constrained. Donations are appreciated and go toward high‑impact projects. Donate.
Contact
We’d love to hear from you. Reach out to discuss collaborations, ask questions, or explore opportunities to work together on AI safety initiatives.
Email us
Schedule a call