Godfather of AI: How To Make Safe Superintelligent AI – Yoshua Bengio

Embracing Uncertainty: Beyond P(Doom) — 80,000 Hours

From Godfather of AI: How To Make Safe Superintelligent AI – Yoshua Bengio. Category: Tech. Format: Interview. This is a single keypoint from the analysis.

Yoshua Bengio explains his reluctance to assign a specific 'p(doom)' probability, preferring to acknowledge a wide interval of uncertainty. He states that any probability significantly above near-zero is unacceptable given the stakes for future generations. Bengio emphasizes that while he is not 100% certain about specific outcomes, the risk of large-scale negative consequences is too great to ignore, which motivates his continued work in AI safety regardless of precise probability calculations.

Impact: Medium. This approach underscores the precautionary principle in AI safety, arguing that the potential severity of outcomes warrants action even in the face of uncertainty. It shifts the focus from precise prediction to robust risk mitigation.

In the source video, this keypoint occurs from 02:31:18 to 02:32:28.

Sources in support: Rob Wiblin (Host)

For the full credibility analysis, key takeaways, and other keypoints from this video, see the full analysis on skim.

This keypoint analysis was generated by skim (skim.plus), an AI-powered content analysis platform by Credible AI.