Inside the Yotta Conference: The Future of AI Infrastructure
A Deep Dive with Nick Hume on Power, Innovation, and the Race for AGI
Fresh from the Yotta Conference in Las Vegas, our US Tech Lead Nick Hume shares exclusive insights into the bleeding edge of data center development, the power crisis threatening AI's future, and why the race to AGI is fundamentally a race for energy.
Meet Nick Hume: Our Eyes on the Ground
Nick Hume brings over 20 years of data center expertise from the frontlines of tech's biggest players. After senior roles at Amazon and Microsoft, he's now consulting on major due diligence projects for companies like CoreWeave and others at the cutting edge of AI infrastructure.
His role at Vedi 3 is crucial: connecting us with key industry insiders who can provide intelligence from the horse's mouth, rather than relying on media narratives or analyst speculation. When Nick speaks about what's happening in data centers, he's not theorizing—he's reporting from the field.
The Yotta Conference: Where Digital Infrastructure Meets Reality
3,000
Attendees
2.5x
Growth in One Year
1 YB
Yottabyte Scale
The Yotta Conference—named after the yottabyte (10^24 bytes, or a trillion terabytes)—has exploded in size, growing 2.5x from 1,200 attendees last year to nearly 3,000 this year. It's where data center providers, cooling companies, research firms, and capital partners converge to solve the industry's most pressing challenges.
The Elephant in the Room: Power
"Everyone has the same problem. How are we going to do this with the energy we have? How are we going to create the energy we need? And what are the constraints to do that?"
While conferences like GTC focus on GPU technology, Yotta zeroes in on the infrastructure crisis that could make or break the AI revolution. The dominant theme? Power—or more accurately, the lack of it.
The Grid Interconnection Crisis
Grid interconnection timelines stretch years, sometimes nearly a decade. With thousands of companies wanting massive power allocations, the traditional approach simply can't keep pace. The solution? Companies are taking matters into their own hands.
Behind-the-Meter Solutions
Gas Turbines
Companies like xAI (Elon Musk's venture) are purchasing their own gas turbines to generate power on-site, bypassing grid constraints entirely.
Solar + Ceramic Storage
Exowatt is pioneering a system of sun-tracking solar panels that store heat in ceramic blocks. Hot air blown across the blocks drives heat engines to provide 24/7 power—essentially using ceramics as batteries, with no rare earth materials required.
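To see why a block of hot ceramic can plausibly stand in for a battery, a back-of-envelope calculation helps. The figures below (block mass, specific heat, temperature swing) are illustrative assumptions for a firebrick-style ceramic, not the vendor's published specifications:

```python
# Back-of-envelope estimate of thermal energy stored in a ceramic block.
# All figures are illustrative assumptions, not vendor specifications.

SPECIFIC_HEAT = 1000.0   # J/(kg*K), typical for firebrick-style ceramics
MASS_KG = 100_000.0      # assume a 100-tonne ceramic block
DELTA_T = 1000.0         # assume blocks heated ~1000 K above ambient

stored_joules = MASS_KG * SPECIFIC_HEAT * DELTA_T  # E = m * c * deltaT
stored_mwh = stored_joules / 3.6e9                 # 1 MWh = 3.6e9 J

print(f"Stored heat: {stored_mwh:.1f} MWh thermal")
```

Under these assumptions a single 100-tonne block holds on the order of tens of megawatt-hours of heat—enough to bridge overnight gaps before conversion losses, which is the basic economics behind thermal storage for data centers.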
Biogas from Waste
A European startup is extracting methane from waste management facilities, providing near-free power while owning the land for deployment—solving both energy and real estate challenges.
Small Modular Nuclear Reactors: The Long Game
Nuclear power dominated discussions, but the consensus was clear: the biggest obstacle isn't technology—it's red tape. The longest part of nuclear deployment is regulatory approval, not construction.
The good news? In the US, policy changes are accelerating approval processes. The stigma around nuclear is fading as Small Modular Reactors (SMRs) prove to be safer, smaller, and more deployable than traditional nuclear plants.
The China Factor: A 35X Advantage
One graph at the conference told a sobering story: China's energy deployment has grown 35 times faster than America's over the same period. While Western democracies have plateaued with renewables, China's trajectory is a hockey stick pointing straight up.
"If the cost of power was zero, what would the world look like? China's driving the price down by doing that build."
This positions China as a serious contender in the AI race. While they may lag in cutting-edge chips, they can compensate with sheer power availability and scale. It's why the US treats this as the new Cold War—because it is.
The Chip Supply Chain: From Bottleneck to Power Constraint
Eighteen months ago, the conversation centered on chip supply—getting enough Nvidia GPUs, securing TSMC capacity, accessing advanced lithography. That's changed.
As TSMC's 3-nanometer process has matured through its second and third generations, chip supply has stabilized. The bottleneck has shifted back to power, capital deployment, and infrastructure.
The Power Density Explosion
- Traditional cloud hyperscaler racks: Sub-10 kilowatts
- Nvidia Blackwell NVL72 racks: 140 kilowatts (a 14x increase)
- Upcoming Vera Rubin racks: 370 kilowatts (roughly another 2.6x increase in just two years)
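The multiples quoted above follow directly from the per-rack figures; a quick sketch makes the compounding explicit (the 10 kW baseline is the upper bound of "sub-10 kilowatts"):

```python
# Rack power-density figures as quoted, with growth multiples computed.
racks_kw = {
    "traditional hyperscaler": 10.0,  # kW, upper bound of "sub-10 kW"
    "Blackwell NVL72": 140.0,         # kW per rack
    "Vera Rubin (upcoming)": 370.0,   # kW per rack
}

blackwell_jump = racks_kw["Blackwell NVL72"] / racks_kw["traditional hyperscaler"]
rubin_jump = racks_kw["Vera Rubin (upcoming)"] / racks_kw["Blackwell NVL72"]

print(f"Blackwell vs. traditional rack: {blackwell_jump:.0f}x")
print(f"Vera Rubin vs. Blackwell:       {rubin_jump:.1f}x")
```

A single 370 kW rack draws more power than many entire legacy data halls were designed to deliver per row, which is why air cooling and raised floors cannot survive this transition.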
This exponential growth in power density is driving everything from liquid cooling innovations to the death of raised floors in data centers. The weight alone of these new servers makes traditional infrastructure obsolete.
Nvidia's Latest Innovations
Rubin CPX: The Efficiency Play
Nvidia just announced Rubin CPX, its answer to more efficient AI inference. Instead of using expensive High Bandwidth Memory (HBM) for every task, CPX uses GDDR7 memory (the same class found in consumer graphics cards) for compute-intensive prefill operations, while reserving HBM for memory-bandwidth-intensive decode operations.
The result? Lower cost per chip, more efficient processing, and better economics for inference workloads. It's Nvidia responding to market demands for efficiency without sacrificing performance where it matters.
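A toy roofline-style model makes the prefill/decode split concrete. The parameter count, weight precision, and prompt length below are hypothetical illustrations, not Rubin CPX specifications; the point is the arithmetic-intensity gap that lets prefill run on cheaper GDDR7 while decode benefits from HBM bandwidth:

```python
# Toy comparison of arithmetic intensity (FLOPs per byte of weights read)
# for prefill vs. decode. All model/hardware numbers are illustrative
# assumptions, not Rubin CPX specifications.

PARAMS = 70e9                  # hypothetical 70B-parameter model
BYTES_PER_PARAM = 2            # fp16/bf16 weights
PROMPT_TOKENS = 4096           # tokens processed in one prefill pass
FLOPS_PER_TOKEN = 2 * PARAMS   # ~2 FLOPs per parameter per token

# Prefill: the whole prompt goes through in one pass, so each weight
# read is amortized over thousands of tokens -> compute-bound.
prefill_flops = FLOPS_PER_TOKEN * PROMPT_TOKENS
prefill_bytes = PARAMS * BYTES_PER_PARAM
prefill_intensity = prefill_flops / prefill_bytes

# Decode: one token per pass, so the full weight set is re-read for a
# single token's worth of math -> memory-bandwidth-bound.
decode_flops = FLOPS_PER_TOKEN * 1
decode_bytes = PARAMS * BYTES_PER_PARAM
decode_intensity = decode_flops / decode_bytes

print(f"prefill: {prefill_intensity:.0f} FLOPs/byte")
print(f"decode:  {decode_intensity:.0f} FLOPs/byte")
```

Under these assumptions prefill does thousands of FLOPs per byte of weights fetched, so it saturates compute long before memory; decode does roughly one, so it starves on bandwidth—the asymmetry that a GDDR7/HBM split exploits.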
Silicon Photonics: The Next Leap
Nvidia's hybrid optical-electrical chips promise 3.5x lower energy consumption and 100x faster speeds. Jensen Huang's mantra—"the more you buy, the more you save"—continues to drive hyperscalers to upgrade constantly, pushing older GPUs down to less demanding workloads.
The Competitive Landscape
Can Anyone Challenge Nvidia?
AMD's challenge is architectural: their "world size" is limited to 8 GPUs per node, while Nvidia can interconnect 72. AMD's solution won't arrive until 2027, leaving Nvidia with a commanding lead for training and large-scale inference.
Google's TPU remains the most formidable competitor—now in its 7th generation and potentially heading to external sales beyond Google Cloud Platform. But Nvidia's software stack, CUDA libraries, and known reliability make them the safer bet for most deployments.
The Neo-Cloud Players
CoreWeave vs. Nebius
CoreWeave: Nvidia's "execution arm," heavily debt-leveraged, focused on US deployment. Think of them as Nvidia's way to expand competition beyond hyperscalers.
Nebius: The "European CoreWeave," but cash-heavy instead of debt-heavy. It just signed a $17.4 billion deal with Microsoft (expandable to roughly $19.4 billion), ironically deploying in New Jersey—CoreWeave's home turf.
Oracle's Bold Bet
Larry Ellison is going "balls and all" (as Mark put it) with a $455 billion contract backlog, including $300 billion from OpenAI alone. Oracle's advantage? They're a mature company with global reach, existing customer base, and the ability to deploy closer to users than newer players.
But it's a company-betting proposition. Oracle is essentially trying to jump from 3% cloud market share to hyperscaler status. The risk? They're taking massive commitments from companies like OpenAI that are themselves heavily leveraged.
Investment Implications: Navigating the Bubble
We're entering frothy territory. Not across the board, but in pockets where highly leveraged players are making massive commitments based on future AI revenue that may or may not materialize.
Red Flags to Watch
- Debt-Heavy Players: CoreWeave, Core Scientific—companies with massive leverage and unproven business models
- Overcommitted Startups: OpenAI's $300 billion Oracle commitment while losing money hand over fist
- Second-Order Effects: When the music stops, who's left without a chair?
Safer Bets
- Hyperscalers: Microsoft, Meta, Google—they have the balance sheets to weather storms
- Infrastructure Providers: ASML, Arista, Credo—picks and shovels plays with diversified customer bases
- Power Solutions: Traditional players like GE (gas turbines) seeing multi-year backlogs
The Quantum Question
Will quantum computing make current Nvidia technology obsolete? The consensus: they'll work together, not replace each other. Quantum solves different problems than AI—think mathematical challenges that would take classical computers billions of years.
Jensen Huang's vision: one quantum computer supported by 500,000 Nvidia GPUs and CPUs. Google's recent Willow chip breakthrough (demonstrating quantum error correction that improves as qubit counts grow) validates this complementary approach.
Watch the Full Q&A Discussion
Exclusive Member Content
This Q&A discussion is available to our premium members. Join today to access in-depth conversations with industry leaders.
The Bottom Line
The AI infrastructure race is fundamentally a power race. Companies that can secure energy—whether through gas turbines, solar ceramics, biogas, or eventually SMRs—will win. Those that can't will be left behind, regardless of how many GPUs they can afford. China's 35x energy advantage makes them a formidable competitor, and the US is treating this as the new Cold War for good reason. For investors, the key is avoiding the highly leveraged players gambling on future AI revenue, while positioning in the infrastructure providers and hyperscalers with the balance sheets to play the long game.