We've only begun to tap into the potential of [[Test Time Compute]]. Tasks like planning, scheduling, and reasoning demand far greater compute at inference. We're now entering a shift - from static Deep Learning to dynamic Reinforcement Learning - where TTC becomes essential. We need more compute, more data centers, more power - and vastly more generated data. To put it in perspective: a single 1 GW data center consumes as much electricity as an entire city.

To push the boundaries of intelligence, we must enable pattern recognition across domains with shifting rules - where objectives are non-stationary and constantly evolving ([[Non-Stationarity of Objectives]]).

When do you pull the plug on autonomous agents? (After watching the latest Mission Impossible, this feels closer than ever.) Maybe it's when they start speaking to each other in a language you can't understand. Or when they begin [[recursive self-improvement]] - learning things you can't trace, heading toward outcomes you can't control. Maybe it's when they gain direct access to weapons, or when exfiltration and self-replication become permissionless.
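The test-time-compute idea above can be made concrete with the simplest TTC strategy: best-of-N sampling, where spending more inference calls on the same problem buys a better answer. A minimal sketch - the toy objective (target at x = 3), the verifier, and the N values are all illustrative assumptions, not from this note:

```python
import random

random.seed(1)

def solve_once():
    """One cheap, noisy attempt at the problem: a random candidate answer."""
    return random.uniform(-10, 10)

def verifier(x):
    """Scores a candidate; here the hypothetical target is x = 3."""
    return -abs(x - 3)

def best_of_n(n):
    """Spend n inference calls, keep the candidate the verifier likes best."""
    return max((solve_once() for _ in range(n)), key=verifier)

# More test-time compute (larger n) systematically improves the answer.
for n in (1, 10, 100):
    x = best_of_n(n)
    print(f"n={n:3d}  answer={x:6.2f}  score={verifier(x):.2f}")
```

The same shape underlies heavier TTC schemes (search, long chains of reasoning with a reward model as the verifier): the model is fixed, and quality is bought with inference compute instead of training compute.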
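The [[Non-Stationarity of Objectives]] point has a standard concrete form in reinforcement learning: when the target drifts, a constant step-size update keeps tracking it, while a sample-average estimate lags ever further behind. A minimal sketch - the drift rate, noise level, and step size are illustrative assumptions:

```python
import random

random.seed(0)

def run(alpha=None, steps=2000):
    """Estimate the mean of a drifting reward signal.

    alpha=None -> sample average (step 1/n); alpha=float -> constant step size.
    Returns the absolute tracking error after the final step.
    """
    true_mean, estimate = 0.0, 0.0
    for n in range(1, steps + 1):
        true_mean += 0.01  # the objective drifts every step (non-stationary)
        reward = true_mean + random.gauss(0, 1)
        step = 1.0 / n if alpha is None else alpha
        estimate += step * (reward - estimate)
    return abs(true_mean - estimate)

# Constant step size tracks the moving target; plain averaging does not.
print(f"sample-average final error: {run():.2f}")
print(f"alpha=0.1 final error:      {run(0.1):.2f}")
```

The sample average weights all history equally, so on a moving objective it converges to where the target *was*; the constant step size discounts old data exponentially and follows where the target *is*.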