OpenAI is trying to outmaneuver Anthropic in courting private-equity firms, reportedly offering a new guaranteed minimum return of 17.5% and early access to its newest models.
That pitch is aimed at firms including TPG and Advent, but this is really a fight over distribution.
These buyout firms own large portfolios of established private companies, so a partnership gives OpenAI a faster way to roll out tools, lock in usage, and build stickier revenue ahead of possible public listings as early as this year. For now, though, Anthropic holds the clear lead in this contest.
The joint-venture setup also helps cover the heavy upfront cost of putting AI inside large companies. That work usually needs engineers to customize models for each client, and that burns cash fast.
Both OpenAI and Anthropic are now racing to sign these private-equity partnerships, and that kind of contest is still pretty new in AI.
Meanwhile, speaking earlier this month at BlackRock’s US Infrastructure Summit in Washington, OpenAI’s CEO Sam Altman said, “Anything at this scale, it’s just like so much stuff goes wrong.”
He pointed to a severe weather event at a data center campus in Abilene, Texas, that temporarily “brought things down.” That campus is the flagship site of the $500 billion Stargate project involving OpenAI, Oracle, and SoftBank.
Last month, OpenAI hit a valuation of $730 billion in a record fundraising round, right after it backed away from some huge spending plans, shelved some bigger ambitions, and accepted that it may be better off buying giant amounts of cloud capacity instead of trying to build enormous data centers itself.
That change does not make the competition easier. OpenAI still has to keep up with Anthropic, Google, and other companies building models, apps, and features.
The problem is that training and running AI models take massive amounts of chips, processing power, memory, and energy. Sam and other executives have been saying for years that compute is one of the company’s biggest bottlenecks.
Even with that, OpenAI has kept raising staggering amounts of money, including $110 billion earlier this year, with $50 billion of that coming from Amazon.
Sam wrote on X in November that OpenAI and other companies “have to rate limit our products and not offer new features and models because we face such a severe compute constraint.”
Before that, much of the story around OpenAI was about how aggressively Sam was trying to secure capacity.
The company signed a run of multibillion-dollar infrastructure deals with Nvidia, Advanced Micro Devices, and Broadcom. In that same November post, Sam said OpenAI was looking at commitments of roughly $1.4 trillion over the next eight years.
How does a company with $13.1 billion in projected annual revenue, money it has not yet earned, take on commitments that huge? The latest financing round added even more capacity deals.
As part of the $110 billion funding package announced last month, OpenAI agreed to use about 2 gigawatts of Trainium capacity through Amazon Web Services. Trainium is AWS’s custom AI chip, and Amazon introduced Trainium3 in December.
Nvidia also joined the round with a $30 billion investment. OpenAI said it expanded its relationship with Nvidia and agreed to use 3 gigawatts of dedicated inference capacity and 2 gigawatts of training capacity on Nvidia’s coming Vera Rubin systems.