Nvidia plans to invest up to $100 billion in OpenAI and supply at least 10GW of its systems to build out the ChatGPT maker’s next-generation infrastructure, the companies said in coordinated announcements.
The package blends capital with a massive pipeline of compute, positioning Nvidia as a preferred supplier while giving OpenAI a path to scale model training and inference far beyond its current footprint.
The build begins with a first gigawatt of Nvidia systems coming online in the second half of 2026, anchored by the company’s forthcoming Vera Rubin platform.
The timeline pushes meaningful new capacity into the back half of 2026 and beyond, which lines up with the industry’s expectation that the next leaps in model size and capability will require far more power, networking bandwidth, and memory than current clusters provide. The partners described the plan as a multi-year rollout that could entail millions of GPUs.
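A rough sanity check shows how the 10GW figure translates into GPU counts. The all-in power per GPU used below (1.2 to 2.0 kW, covering the chip plus networking, cooling, and facility overhead) is an illustrative assumption, not a figure from the announcements:

```python
# Back-of-envelope check on the "millions of GPUs" claim.
# Assumption: all-in power per GPU (accelerator plus networking,
# cooling, and facility overhead) falls in the 1.2-2.0 kW range.
# These per-GPU figures are estimates, not numbers from the deal.

TOTAL_POWER_W = 10e9  # the announced 10 GW buildout

for per_gpu_kw in (1.2, 1.5, 2.0):
    gpus = TOTAL_POWER_W / (per_gpu_kw * 1e3)
    print(f"{per_gpu_kw:.1f} kW/GPU all-in -> ~{gpus / 1e6:.1f} million GPUs")
```

Under those assumptions the 10GW commitment works out to roughly 5 to 8 million GPUs, consistent with the companies’ "millions of GPUs" framing.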
A 10GW target signals a sweeping buildout of data centers and grid tie-ins at a time when power constraints are emerging as a primary bottleneck for AI.
For Nvidia, the roadmap deepens demand visibility for upcoming platforms and reinforces its full-stack approach, which bundles silicon with interconnects, software, and reference data center designs.
For OpenAI, the partnership adds an industrial-scale compute track alongside its existing cloud relationships, reducing the risk that access to GPUs limits model development.
A start date in the second half of 2026 underscores how long it takes to stand up leading-edge AI capacity, from site selection and power procurement to delivery of networking gear and liquid cooling.
If schedules slip, model roadmaps and expected productivity gains could slide as well. If the rollout stays on track, the added throughput should support both frontier model training and a surge in inference, including enterprise copilots and agentic systems that run at scale.
A long-dated commitment of this magnitude extends Nvidia’s revenue pipeline into the Vera Rubin era and potentially beyond, even as rivals push alternative accelerators.
The agreement raises the bar for access to compute and may intensify competition for power and specialized labor.
More available inference capacity could expand addressable markets, though pricing for GPU time will depend on how quickly supply catches up with demand.
The companies have signaled interest in the UK and other regions for future sites, which could introduce policy and geopolitical variables into the buildout.
Any equity component would also recalibrate OpenAI’s cap table and could affect revenue-sharing arrangements tied to future models.
The partnership codifies compute as the scarce asset in AI and gives OpenAI a multi-year runway to chase larger and more capable systems.