Dwarkesh Patel · April 30, 2026
How GPT-5, Claude, and Gemini are actually trained and served – Reiner Pope

Reiner Pope: The Compute vs. Memory Trade-off — Dwarkesh Patel

From How GPT-5, Claude, and Gemini are actually trained and served – Reiner Pope. Category: Tech. Format: Commentary. This is a single keypoint from the analysis.

The RevNets approach represents a strategic trade-off: because reversible layers can reconstruct their inputs from their outputs, activations can be recomputed during the backward pass instead of stored, spending extra computation to save significant memory during training. The KV cache mechanism makes the opposite trade: it stores the keys and values of past tokens so that attention at each decoding step does not recompute them, spending memory to save computation. Together they highlight how different optimization strategies arise in AI development.
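A minimal sketch of the two opposite trade-offs described above. This is illustrative only and not taken from any real training or serving codebase; the function names (`rev_block_forward`, `rev_block_inverse`, `KVCache`) and the toy functions `f` and `g` are assumptions introduced here. The reversible block shows that inputs can be recovered from outputs by re-running `f` and `g` (extra compute, no stored activations), while the cache shows the inverse bargain (extra memory, no recomputation):

```python
def rev_block_forward(x1, x2, f, g):
    """RevNet-style coupling block: y1 = x1 + f(x2), y2 = x2 + g(y1).
    Nothing about x1/x2 needs to be saved for the backward pass."""
    y1 = x1 + f(x2)
    y2 = x2 + g(y1)
    return y1, y2


def rev_block_inverse(y1, y2, f, g):
    """Recover the inputs from the outputs alone by re-running f and g:
    this recomputation is the 'spend compute to save memory' trade."""
    x2 = y2 - g(y1)
    x1 = y1 - f(x2)
    return x1, x2


class KVCache:
    """The opposite trade: store every past key/value so attention at
    step t reuses them instead of recomputing ('spend memory to save
    compute'). Keys/values here are opaque placeholders."""

    def __init__(self):
        self.keys, self.values = [], []

    def append(self, k, v):
        self.keys.append(k)
        self.values.append(v)

    def all(self):
        return self.keys, self.values


if __name__ == "__main__":
    # Toy residual functions standing in for learned sub-networks.
    f = lambda x: 2 * x
    g = lambda x: x + 1
    y1, y2 = rev_block_forward(3.0, 5.0, f, g)
    # Inversion reproduces the original inputs exactly.
    assert rev_block_inverse(y1, y2, f, g) == (3.0, 5.0)
```

The coupling structure (`y1 = x1 + f(x2)`, `y2 = x2 + g(y1)`) is what makes exact inversion possible regardless of what `f` and `g` compute, which is why RevNet-style training can drop stored activations entirely at the cost of one extra forward evaluation per block.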

Impact: Medium. This distinction clarifies the diverse engineering challenges and solutions in training and serving AI models, showing how different hardware and performance constraints drive distinct architectural choices.

In the source video, this keypoint occurs from 02:12:00 to 02:13:29.

Sources in support: Dwarkesh Patel (Host)

For the full credibility analysis, key takeaways, and other keypoints from this video, see the full analysis on skim.

This keypoint analysis was generated by skim (skim.plus), an AI-powered content analysis platform by Credible AI.