Dwarkesh Patel, April 30, 2026
How GPT-5, Claude, and Gemini are actually trained and served – Reiner Pope
2:13:40

The Memory Wall: A Growing Constraint — Dwarkesh Patel

From How GPT-5, Claude, and Gemini are actually trained and served – Reiner Pope. Category: Tech. Format: Commentary. This is a single keypoint from the analysis.

The rising cost and scarcity of memory, particularly High Bandwidth Memory (HBM), are becoming a significant bottleneck for hyperscalers, shaping hardware design and potentially slowing the cadence of device upgrades. This "memory wall" is a critical constraint on current AI infrastructure development.
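One way to see why HBM is the bottleneck: autoregressive decode must stream every model weight from memory for each generated token, so single-stream throughput is capped by bandwidth divided by model size. The sketch below illustrates that arithmetic; all numbers (bandwidth, model size, precision) are hypothetical examples, not figures from the talk.

```python
# Back-of-envelope "memory wall" estimate for LLM decode.
# Generating one token streams all weights from HBM, so the
# memory-bound ceiling is: tokens/sec <= bandwidth / model bytes.
# Numbers below are illustrative assumptions, not from the source.

def decode_tokens_per_sec(hbm_bandwidth_bytes_per_s: float,
                          param_count: float,
                          bytes_per_param: float = 2.0) -> float:
    """Upper bound on single-stream decode speed in the memory-bound regime."""
    model_bytes = param_count * bytes_per_param
    return hbm_bandwidth_bytes_per_s / model_bytes

# Hypothetical H100-class accelerator: ~3.3 TB/s of HBM bandwidth.
bw = 3.3e12
# Hypothetical 70B-parameter model in 16-bit weights (140 GB of weights).
print(round(decode_tokens_per_sec(bw, 70e9), 1))  # ~23.6 tok/s ceiling
```

Batching amortizes the weight stream across many requests, which is why serving stacks push batch sizes up; but per-user latency stays pinned to this bandwidth ceiling, which is the pressure on HBM the keypoint describes.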

Impact: High. This constraint forces a re-evaluation of hardware design, pushing for more efficient memory utilization and potentially impacting the pace of AI advancement. The sheer scale of hyperscaler CapEx on memory underscores its critical role.

In the source video, this keypoint occurs from 01:02:14 to 01:04:00.

Sources in support: Dwarkesh Patel (Host), Horace He (Lecturer)

For the full credibility analysis, key takeaways, and other keypoints from this video, see the full analysis on skim.

This keypoint analysis was generated by skim (skim.plus), an AI-powered content analysis platform by Credible AI.