If you’ve tried to track the blizzard of OpenAI partnership announcements lately, you might feel like you’re watching the most ambitious infrastructure strategy ever conceived, or a Three Stooges-style routine where everyone keeps paying the same $20 back and forth…except now it’s $20 billion.
What’s happening isn’t just deal-making; it’s a new economic model taking shape in real time, one where cloud giants, chipmakers, and an AI lab effectively co-finance the future of compute.
The Dealbook (Last ~6 Months)
| Partner | Deal Value |
| --- | --- |
| AWS | ~$38B cloud and compute commitment |
| Oracle | ~$300B multi-year cloud infrastructure partnership |
| CoreWeave | ~$22B compute contract (after $6.5B expansion) |
| Nvidia | ~$100B infra + strategic investments and supply pipeline |
| AMD | Multi-billion chip supply + equity stake (~10% reported) |
| Broadcom | ~$10B co-development & networking partnership |
That isn’t hype-cycle spending; that’s nation-state-level capital allocation to train and deploy frontier models. For scale, global cloud infrastructure capex last year was roughly $400B.
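For a rough sense of that scale, here’s a back-of-the-envelope tally (a sketch using only the headline figures in the table above; these are multi-year commitments, so comparing them against a single year of capex is about magnitude, not an apples-to-apples ratio):

```python
# Back-of-the-envelope tally of the headline deal values above,
# in billions of USD. AMD is omitted because its reported value
# is "multi-billion" with no firm headline number.
deals = {
    "AWS": 38,
    "Oracle": 300,
    "CoreWeave": 22,
    "Nvidia": 100,
    "Broadcom": 10,
}

total = sum(deals.values())   # ~$470B across multi-year commitments
annual_cloud_capex = 400      # rough global cloud infra capex, one year

print(f"Announced commitments: ~${total}B (multi-year)")
print(f"Global cloud capex:    ~${annual_cloud_capex}B (one year)")
print(f"Roughly {total / annual_cloud_capex:.1f}x a full year of capex")
```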
Why All the Money?
Two words: compute scarcity.
Cloud vendors need AI workloads; OpenAI needs the cloud; chipmakers need anchor customers; and investors need to keep the flywheel spinning. So money moves in a loop: Cloud funds GPUs → GPU vendors invest in AI → AI workloads commit to cloud → repeat.
It’s less a buyer-seller market than a closed-loop AI economy where access to compute matters more than classic balance-sheet logic.
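To see why headline totals can outrun net new capital, here’s a toy sketch of that loop (every parameter is hypothetical, chosen purely for illustration; this models no actual deal):

```python
# Toy model of the closed loop described above: cloud spend funds
# GPUs, chip vendors reinvest in the AI lab, and the lab commits
# that capital back to cloud contracts. Every hop gets announced
# as a new "deal," so headline totals stack up. The reinvestment
# rate below is hypothetical.
def headline_deal_value(initial_spend_b: float, rounds: int) -> float:
    """Sum the announced value of each hop of the loop, in $B."""
    reinvest_rate = 0.6   # hypothetical share recirculated per hop
    flow, total = initial_spend_b, 0.0
    for _ in range(rounds):
        total += flow          # each hop is counted as a fresh deal
        flow *= reinvest_rate  # only part of it recirculates
    return total

# $20B of initial capital, recirculated over five hops
print(f"Headline deal value: ~${headline_deal_value(20, 5):.0f}B")
# Prints ~$46B: more than double the net new capital, because the
# same dollars are counted at every hop of the loop.
```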
Who’s Winning Right Now?
- Nvidia: Prints money, sells shovels in a gold rush, and gets strategic leverage over the stack.
- Oracle: Goes from cloud outsider to preferred capacity partner overnight. The sleeper winner.
- AWS: Back in the OpenAI mix now that Microsoft’s exclusivity has ended, framing the deal as a scale play.
- OpenAI: Buys optionality and multi-cloud leverage. Potentially the first AI lab to operate like a sovereign compute entity.
Who Slipped?
- Microsoft: Still central, but no longer the sole highway. Their thesis of exclusive alignment changed fast.
- Anyone betting compute supply would normalize this year: Spoiler…it didn’t.
Why This Matters
This isn’t just horse-trading. It’s a signal: the biggest cost in AI isn’t model development; it’s running the models.
Compute is now a strategic asset class. And unlike software, it doesn’t scale for free.
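A hypothetical bit of arithmetic makes the point: a training run is a one-time cost, while inference cost grows with every query served (all numbers below are invented for illustration, not estimates of any real model’s economics):

```python
# Why compute doesn't scale for free: training is a one-time cost,
# but inference cost grows with every query served. All numbers are
# invented for illustration only.
training_cost_b = 1.0            # one-time training run, $B (hypothetical)
cost_per_query = 0.01            # $ per query served (hypothetical)
queries_per_day = 1_000_000_000  # daily query volume (hypothetical)

annual_inference_b = cost_per_query * queries_per_day * 365 / 1e9
print(f"One-time training: ~${training_cost_b:.2f}B")
print(f"Annual inference:  ~${annual_inference_b:.2f}B per year")
# At this (made-up) scale, a year of inference costs several times
# the training run, and the bill recurs every year.
```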
The Bottom Line
We may look back at this period as the moment the AI stack became its own economy, one where hyperscalers, chipmakers, and AI labs all bankroll each other to secure computational dominance.
It’s ambitious, messy, and occasionally comedic. But it’s also a preview of what the future looks like.
And if it feels like the Three Stooges swapping IOUs… just remember: these IOUs build data centers the size of football stadiums.
EMA’s Take
The AI frontier has shifted from algorithmic edge to infrastructure edge. The winners won’t just be the teams with the best models, but those with guaranteed compute, efficient inference pipelines, and diversified supply lines. The next advantage isn’t clever prompts; it’s predictable power.

