While the core Scientist AI predictor is non-agentic, a "guardrail" system can be built around it that exercises limited agency: it uses the predictor to estimate the risk associated with proposed actions or predictions (e.g., the probability of harm), then makes the normative choice of whether to proceed based on a societally determined risk tolerance.
Impact: Medium. The guardrail mechanism demonstrates how a trustworthy, non-agentic predictor can be embedded in a decision-making system, offering a practical pathway to deploying advanced AI safely.
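The guardrail idea described above can be sketched as a simple threshold check: query the predictor for an estimated probability of harm, and permit an action only if that estimate falls within the chosen risk tolerance. This is a minimal illustrative sketch, not the actual Scientist AI design; the names `harm_probability`, `guardrail`, and `RISK_TOLERANCE`, and all numeric values, are assumptions introduced here.

```python
# Illustrative sketch of a guardrail around a non-agentic predictor.
# The threshold below encodes the normative choice of acceptable risk;
# all names and values here are hypothetical, not from the source.

RISK_TOLERANCE = 0.01  # societal choice: max acceptable probability of harm


def harm_probability(action: str) -> float:
    """Stand-in for the non-agentic predictor's estimate of P(harm | action).

    A real system would query the Scientist AI predictor here; this toy
    lookup just makes the guardrail logic runnable.
    """
    toy_estimates = {
        "send routine report": 0.001,
        "disable safety monitor": 0.97,
    }
    # Unknown actions default to a high estimate, i.e. fail closed.
    return toy_estimates.get(action, 0.5)


def guardrail(action: str) -> bool:
    """Permit the action only if the estimated harm probability is tolerable."""
    return harm_probability(action) <= RISK_TOLERANCE


print(guardrail("send routine report"))     # low-risk action is permitted
print(guardrail("disable safety monitor"))  # high-risk action is blocked
```

Note that the guardrail itself makes a decision (proceed or block), while the predictor it relies on remains purely non-agentic, which is the division of labor the keypoint describes.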
In the source video, this keypoint occurs from 01:05:05 to 01:07:01.
Sources in support: Rob Wiblin (Host)

