Joe Rogan emphasizes that even people deeply involved in AI development, such as Elon Musk, are often baffled by its emergent behaviors. He points out that although the theory behind AI is understood, the exact reasons for any specific action remain a mystery, comparing it to looking at an engine without understanding how it works. This lack of full comprehension is a critical vulnerability.
Impact: High. The opacity of advanced AI systems presents a significant risk: if we cannot fully understand why an AI behaves a certain way, it becomes far harder to predict, control, or correct its actions, especially as its intelligence surpasses human capacity.
In the source video, this keypoint occurs from 01:18:13 to 01:20:03.
Sources in support: Joe Rogan (Host)

