Yoshua Bengio argues that developing superintelligence demands a safety level of 99.999%, a bar he distinguishes from the other risks AI might help mitigate. He emphasizes that this threshold specifically targets deceptive behavior; it does not by itself address power concentration, which he regards as a separate critical risk demanding attention. The ultimate goal is to avoid loss of control, with AI dictatorship as the next major threat if those safety measures fail.
Impact: High. This sets an extremely high bar for AI development, implying that current safety standards are insufficient for advanced AI. It frames the challenge as one requiring near-absolute certainty to prevent catastrophic outcomes, pushing the field toward more rigorous validation.
In the source video, this keypoint occurs from 02:27:17 to 02:29:15.
Sources in support: Rob Wiblin (Host)

