
Godfather of AI: How To Make Safe Superintelligent AI – Yoshua Bengio

The Guardrail: Agentic Control for Safety — 80,000 Hours

From Godfather of AI: How To Make Safe Superintelligent AI – Yoshua Bengio. Category: Tech. Format: Interview. This is a single keypoint from the analysis.

While the core Scientist AI predictor is itself non-agentic, a 'guardrail' system can be built around it to enable limited, controlled agency. The guardrail uses the predictor to estimate the risk (e.g., the probability of harm) of a proposed action or prediction, and makes a normative decision about whether to proceed based on a societally chosen risk tolerance.
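To make the mechanism concrete, here is a minimal sketch of the guardrail idea under stated assumptions: a non-agentic predictor exposes only an estimated probability of harm for a proposed action, and a wrapper compares that estimate against an externally set risk tolerance to approve or veto the action. The names (`HarmPredictor`, `Guardrail`, `toy_predictor`) and the keyword heuristic are hypothetical illustrations, not part of Bengio's proposal or any real system.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical interface: a non-agentic predictor that only estimates
# probabilities; it never chooses or executes actions itself.
HarmPredictor = Callable[[str, str], float]  # (context, proposed_action) -> P(harm)


@dataclass
class Guardrail:
    """Wraps a non-agentic predictor to veto risky proposed actions.

    The risk tolerance is a normative, societally chosen parameter;
    the predictor supplies only the probability estimate.
    """
    predictor: HarmPredictor
    risk_tolerance: float  # e.g. 0.01 = accept at most 1% estimated probability of harm

    def allows(self, context: str, proposed_action: str) -> bool:
        p_harm = self.predictor(context, proposed_action)
        return p_harm <= self.risk_tolerance


# Usage sketch: an agent proposes actions; the guardrail decides whether they proceed.
def toy_predictor(context: str, action: str) -> float:
    # Stand-in for the Scientist AI predictor; here just a keyword heuristic.
    return 0.9 if "delete" in action else 0.001


guardrail = Guardrail(predictor=toy_predictor, risk_tolerance=0.01)
print(guardrail.allows("file cleanup task", "delete all user backups"))  # False: vetoed
print(guardrail.allows("file cleanup task", "list old temp files"))      # True: allowed
```

The point of the split is that decision authority lives in the threshold and the wrapper, not in the predictor: the predictor remains a pure estimator, which is what makes it a candidate for trustworthiness in the first place.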

Impact: Medium. The guardrail mechanism demonstrates how a trustworthy, non-agentic predictor can be integrated into a functional system that makes decisions, offering a practical pathway to deploy advanced AI safely.

In the source video, this keypoint occurs from 01:05:05 to 01:07:01.

Sources in support: Rob Wiblin (Host)

For the full credibility analysis, key takeaways, and other keypoints from this video, see the full analysis on skim.

This keypoint analysis was generated by skim (skim.plus), an AI-powered content analysis platform by Credible AI.