Proving safety for narrow AI outputs
Evan Miyazono · Atlas Computing
July 2024

Identifies domains where AI can deliver capabilities with quantitative guarantees against objective safety criteria. Maps a path to generating software with formal proofs of specification compliance.
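As a minimal sketch of what "software with formal proofs of specification compliance" can look like, here is a hypothetical example in Lean 4; the `clamp` function and its safety property are illustrative assumptions, not material from the talk itself:

```lean
-- Hypothetical example: a clamp function paired with a machine-checked
-- safety specification. `clamp lo hi x` bounds x into [lo, hi] from above.
def clamp (lo hi x : Nat) : Nat := min hi (max lo x)

-- Specification: the output never exceeds the upper bound `hi`.
-- The proof is checked by the Lean kernel; shipping code with such a
-- theorem is one form of "proof of specification compliance".
theorem clamp_le_hi (lo hi x : Nat) : clamp lo hi x ≤ hi :=
  Nat.min_le_left hi (max lo x)
```

The point of the approach is that the guarantee is objective and mechanically verified: the theorem fails to compile unless the code actually satisfies the stated criterion.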