Yoshua Bengio explains his reluctance to assign a specific 'p(doom)' probability, preferring to acknowledge a wide interval of uncertainty. He argues that any probability meaningfully above zero is unacceptable given the stakes for future generations. Bengio emphasizes that while he is not certain about specific outcomes, the potential for large-scale negative consequences is too great to ignore, which motivates his continued work in AI safety regardless of precise probability estimates.
Impact: Medium. This approach underscores the precautionary principle in AI safety, arguing that the potential severity of outcomes warrants action even in the face of uncertainty. It shifts the focus from precise prediction to robust risk mitigation.
In the source video, this keypoint occurs from 02:31:18 to 02:32:28.
Sources in support: Rob Wiblin (Host)