
“The future of artificial intelligence will be determined not only by algorithms, but by the gigawatts of energy and infrastructure that make those algorithms possible.” – MJ Martin

Introduction

The bombshell announcement by OpenAI and Nvidia to unleash at least 10 gigawatts of Nvidia-powered AI infrastructure, bolstered by OpenAI’s Stargate alliance with Oracle and SoftBank, is nothing less than an industrial moonshot disguised as a tech deal. 

This is not a polite step forward in research capacity; it is an audacious land grab for the very foundations of artificial intelligence at planetary scale.  What is being built is not just racks of servers, but sprawling monuments of power lines, cooling towers, financial engineering, and corporate ambition.  It is a promise that tomorrow’s intelligence will belong to those who can monopolize today’s energy and real estate.

Far from a neutral technical upgrade, this move hardwires control of AI into a handful of corporate giants, and it raises hard questions about energy sovereignty, public accountability, and who will pay the price for this future.  This paper examines the details of the deal, the motives of its architects, the daunting engineering and environmental challenges, the financial and regulatory risks, and what this battle for compute supremacy means for the world – and for Canada.

[Image: Three executives in black jackets standing in front of a data centre rack.]

The Announcement

On 22 September 2025, OpenAI and Nvidia issued a joint letter of intent to build and deploy at least 10 gigawatts of Nvidia systems to support OpenAI’s next generation of AI models, with Nvidia committing up to US$100 billion in staged investments as each gigawatt comes online.  The first gigawatt is slated for deployment in the second half of 2026, leveraging Nvidia’s Vera Rubin platform as a reference architecture. [1] The agreement frames Nvidia not just as a vendor of chips but as a capital partner and system integrator in OpenAI’s compute roadmap.

In parallel, OpenAI, Oracle and SoftBank announced five new U.S. AI data centre sites under their Stargate initiative, bringing the total planned capacity to nearly 7 gigawatts and over US$400 billion of committed investment across the next three years.  The new sites are located in Shackelford County, Texas; Doña Ana County, New Mexico; an unnamed Midwest location; Lordstown, Ohio; and Milam County, Texas. [2] Combined with the existing Abilene, Texas campus and ongoing builds, these additions are presented as putting Stargate ahead of schedule toward a full 10 GW, US$500 billion pledge. [2][3] Oracle’s role is particularly salient: under its deal with OpenAI it would be responsible for delivering up to 4.5 GW of capacity through its cloud and data centre network. [3]

The announcements together tie hardware, system design, and physical deployment into a unified push.  Nvidia’s capital pledge signals long-term alignment with OpenAI’s strategic goals, while Stargate’s geographic footprint provides power sourcing, grid interconnection, real estate, and jurisdictional diversity.  By coordinating these layers, the partners aim to reduce the friction that typically accompanies large infrastructure builds spanning jurisdictions and technical domains.

[Image: Diagram titled ‘The Infinite Money Glitch’, showing US$100 billion investments linking OpenAI, Oracle, and Nvidia in a triangle.]

Motivations and Strategic Logic

From OpenAI’s perspective, securing large-scale compute is central.  The frontier of AI research and deployment demands ever-larger models, higher inference throughput, and adaptive hardware pipelines.  Without assured access to multi-gigawatt infrastructure, model ambition risks being throttled by supply constraints or vendor lock-in.  By anchoring a multi-gigawatt pipeline, OpenAI establishes a credible growth path and insulates itself from unpredictable external procurement cycles.

For Nvidia, the partnership deepens its integration into the AI supply chain.  It secures demand visibility far into the future and strengthens the dependency of one of the largest AI model producers on Nvidia’s systems and platforms.  The arrangement reinforces Nvidia’s position at the heart of compute for generative AI and mitigates the competitive risk posed by alternative architectures or chip suppliers.

Oracle’s participation offers a path to reposition its cloud and infrastructure business closer to the frontier of AI compute.  By providing portions of the Stargate capacity, Oracle can convert capital investment into real, scalable assets and capture a share of the value created when those data centres host model training or inference workloads.  For Oracle, the deal offers a chance to accelerate its relevance in a cloud landscape increasingly dominated by AI workloads rather than traditional enterprise applications.

In sum, the alignment of these actors suggests a belief that the future of AI will not be won through model algorithms alone, but by those who can coordinate compute, energy, real estate, supply, and delivery at scale.

[Image: Aerial view of a large data centre campus under construction in a rural area.]

Engineering, Energy, and Infrastructure Challenges

Deploying 10 GW of compute is not simply a matter of stacking racks of GPUs.  It entails power systems, cooling, land, grid upgrades, and thermal management.  Each gigawatt-scale facility requires robust transmission connections, often with new substations or line upgrades, redundant power paths, and reliability planning.  Cooling is nontrivial: waste heat must be removed, and water, air, or other thermal media must be sourced.  Some campuses may pursue advanced air cooling or waste-heat reuse to reduce water dependency.
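
A back-of-envelope sketch makes the cooling challenge concrete: essentially all electrical power delivered to IT equipment becomes heat that must be rejected.  The IT load and power usage effectiveness (PUE) figures below are illustrative assumptions, not published specifications for these campuses.

```python
# Back-of-envelope sizing for one gigawatt-class campus.
# All figures are illustrative assumptions, not published deal specs.

IT_LOAD_MW = 1_000   # assumed IT (compute) load: 1 GW
PUE = 1.2            # assumed power usage effectiveness (cooling, power losses)

facility_load_mw = IT_LOAD_MW * PUE          # total grid draw
overhead_mw = facility_load_mw - IT_LOAD_MW  # cooling + distribution overhead
heat_reject_mw = IT_LOAD_MW                  # virtually all IT power becomes heat

print(f"Grid draw:       {facility_load_mw:,.0f} MW")
print(f"Non-IT overhead: {overhead_mw:,.0f} MW")
print(f"Heat to reject:  {heat_reject_mw:,.0f} MW, continuously")
```

Even at a modest assumed PUE, each campus adds hundreds of megawatts of cooling and distribution load on top of the compute itself, which is why siting near abundant power, and committing early to a cooling strategy, dominates the engineering conversation.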

The distribution across multiple sites mitigates some risk – different grids, jurisdictions, and energy mixes diversify exposure.  However, it also multiplies permitting burdens, supply-chain complexity, and coordination overhead.  Early phases must inform later ones: lessons in airflow, interconnect delays, power efficiency, and chip operation must guide subsequent campuses.  The choice of Nvidia’s Vera Rubin platform for the first gigawatt suggests a design baseline to which later expansions may conform or evolve; this reference-platform approach helps limit design divergence and permits economies of scale in system integration.

An important technical nuance is that training and inference workloads are likely to cohabit the same sites, shifting resource allocation dynamically.  Model updates, validation, fine-tuning, and inference traffic may interleave, demanding flexible scheduling, isolation, and resource partitioning across tens of thousands of nodes.  Network topology and memory bandwidth become as critical as raw GPU counts.  Coordinating such complexity across evolving model architectures adds risk to the build-out.
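
A toy sketch illustrates the kind of dynamic partitioning involved.  The actual Stargate scheduling systems are not public; the node counts and reservation policy below are purely hypothetical.

```python
# Toy node partitioner: splits a fixed GPU-node pool between training
# and inference as demand shifts. Purely illustrative; real cluster
# schedulers must also handle topology, memory bandwidth, and isolation.

TOTAL_NODES = 50_000         # hypothetical campus-scale node count
MIN_TRAINING_NODES = 20_000  # assumed floor reserved for training runs

def partition(inference_demand_nodes: int) -> tuple[int, int]:
    """Return (training_nodes, inference_nodes) for the current demand."""
    inference = min(inference_demand_nodes, TOTAL_NODES - MIN_TRAINING_NODES)
    training = TOTAL_NODES - inference
    return training, inference

# Simulate a day: inference demand ramps up, peaks, then falls off,
# and idle capacity flows back to training.
for demand in (5_000, 18_000, 35_000, 12_000):
    train, infer = partition(demand)
    print(f"demand={demand:>6,}  training={train:>6,}  inference={infer:>6,}")
```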

Power consumption models for GPU clusters show that under high utilization, individual nodes draw on the order of kilowatts; multiplied across thousands or tens of thousands of nodes, the cumulative demand becomes massive.  Measurements of GPU-accelerated nodes suggest that real operational power draw may fall below rated maximums, but still sits in the kilowatt range per node under load. [4] In aggregate, even conservative estimates suggest that the energy footprint of 10 GW of AI compute would rival that of entire cities or industrial sectors, making energy procurement and carbon-footprint strategy central to the infrastructure decision.
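
The arithmetic behind that claim is simple to sketch.  The per-node draw, node count, and utilization below are illustrative assumptions; the measurements in [4] concern a single GPU-accelerated node, not these specific campuses.

```python
# Rough aggregate energy estimate for a 10 GW build-out.
# Assumed values are illustrative, not disclosed deal parameters.

NODE_DRAW_KW = 10.0                       # assumed draw per multi-GPU node under load
NODES_PER_GW = int(1e6 / NODE_DRAW_KW)    # nodes fitting in 1 GW of IT load
TOTAL_GW = 10
UTILIZATION = 0.7                         # assumed average utilization
HOURS_PER_YEAR = 8_760

avg_power_gw = TOTAL_GW * UTILIZATION
annual_twh = avg_power_gw * HOURS_PER_YEAR / 1_000  # GW x h -> TWh

print(f"Nodes per gigawatt: ~{NODES_PER_GW:,}")
print(f"Average draw:       {avg_power_gw:.1f} GW")
print(f"Annual consumption: ~{annual_twh:.0f} TWh/year")
# ~61 TWh/year at 70% utilization -- on the order of the annual
# electricity consumption of several large cities combined.
```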

[Image: A data centre aisle lined with server racks and organized cabling.]

Financial, Regulatory, and Risk Dimensions

Nvidia’s US$100 billion commitment is significant not just for its scale but for its structure: it is conditional, phased, and tied to the deployment of gigawatt increments, effectively sharing risk across delivery milestones.  This staging allows the parties to reassess demand, cooling, grid readiness, and cost trajectories before committing to downstream phases.  OpenAI also retains levers to pace the build-out according to real demand signals.
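
One simple way to read that structure, assuming the pledge is spread evenly (the actual tranche terms have not been disclosed), is roughly US$10 billion released per gigawatt milestone, with later tranches contingent on earlier deliveries:

```python
# Illustrative staging model for Nvidia's pledge. The real tranche
# sizes, conditions, and instruments are not public.

TOTAL_PLEDGE_B = 100                     # US$100B total, per the letter of intent
TOTAL_GW = 10
TRANCHE_B = TOTAL_PLEDGE_B / TOTAL_GW    # ~US$10B per gigawatt milestone

def committed_capital(gigawatts_online: int) -> float:
    """Capital released once a given number of gigawatts are deployed."""
    return min(gigawatts_online, TOTAL_GW) * TRANCHE_B

for gw in (1, 4, 10):
    print(f"{gw:>2} GW online -> ~US${committed_capital(gw):,.0f}B committed")
# Staging caps the downside: if demand stalls at 4 GW, only ~US$40B
# of the US$100B pledge has actually been deployed.
```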

Yet the risks are manifold.  Slower model adoption, weaker application demand, algorithmic efficiency improvements, or alternative compute paradigms could reduce the capacity actually required.  If the demand projections prove too optimistic, lower utilization could undercut the financial model.  Conversely, if demand runs hotter than expected, the partners may struggle to scale fast enough.  Governance is another question.  With Nvidia, OpenAI, Oracle, and SoftBank all holding intertwined stakes, questions of arm’s-length capacity allocation, preferential access, or circular investment flows may invite regulatory scrutiny on competition and antitrust grounds.

Permitting, grid-connection approvals, environmental assessments, and community buy-in are further potential bottlenecks.  Local opposition, water-use regulation, land-use conflicts, and inter-jurisdictional coordination may slow deployment in some regions.  Capital costs, supply-chain delays (for power equipment, silicon, and networking gear), and construction risk are also significant.  The magnitude of capital exposure – even in phased increments – means that a delay or cost overrun at one campus can ripple into the viability of subsequent ones.

[Image: Players in vintage baseball uniforms before a cornfield, captioned ‘If you build it, he will come.’]

Implications for Global and Canadian AI Infrastructure

Globally, the announcement resets scale expectations for AI infrastructure.  It asserts that 10 GW is not hypothetical but actionable, and it frames compute deployment as a contest among national and cross-national actors.  Competitors in Asia, Europe, or the Middle East may feel pressure to match or exceed this scale, particularly if leading model development concentrates in a few infrastructure blocs.

For Canada, while none of the named sites are domestic, the announcement is highly relevant.  Canada has promising attributes: clean energy (hydro, wind, and solar), relatively stable regulatory regimes, and a high concentration of AI talent in cities like Toronto, Montreal, Edmonton, and Vancouver.  These attributes could appeal to AI infrastructure projects seeking lower carbon footprints and proximity to skilled human capital.  Realizing this potential, however, would require streamlined permitting, proactive grid expansion, transmission planning, incentives for compute-and-power co-planning, and frameworks for heat reuse in local communities.

If Canada does not move to attract large-scale AI campus investments, it risks becoming a consumer of compute located elsewhere rather than a producer of infrastructure value.  Furthermore, as compute capacity scales, data sovereignty, cross-border connectivity, and policy frameworks for AI oversight, taxation, and industrial strategy will come under scrutiny.  The OpenAI-Nvidia-Oracle plan thus functions not only as a technical model but as a strategic challenge: Canada must choose whether to sit on the sidelines or actively compete in this next wave of infrastructure build-out.

[Image: The Canadian flag overlaid with lines of computer code.]

Summary

The joint announcement by OpenAI, Nvidia, and Oracle to bulldoze ahead with 10 gigawatts of AI data centre capacity is not just an escalation; it is a declaration of war in the infrastructure race for artificial intelligence.  This is no vague promise wrapped in corporate jargon; it is a blueprint with names, sites, billions of dollars, and the audacity to redraw the global map of technological power.

The message is clear: the future of AI will not be won in labs or through clever algorithms, but by those who seize control of the pipelines of energy, compute, and geography.  If the build-out succeeds, it will not just bend cost curves; it will crush weaker rivals, entrench monopolies, and shift innovation away from the open commons into fortress-like corporate fiefdoms.

Yet the dangers are staggering: energy systems strained to the brink, regulatory regimes outpaced by megaprojects, and capital risks that could topple even the giants if demand falters. 

For Canada and other nations, the wake-up call is deafening: either they move now to claim a place in this infrastructure land grab, or they resign themselves to permanent dependency on those who already own the future.

Endnotes

1. Nvidia, “OpenAI and NVIDIA Announce Strategic Partnership to Deploy 10 GW of NVIDIA Systems,” September 2025.

2. OpenAI, “OpenAI, Oracle, and SoftBank Expand Stargate with Five New AI Data Center Sites,” September 2025.

3. Reuters, “OpenAI, Oracle, SoftBank Plan Five New AI Data Centers for $500 Billion Stargate Project,” September 2025.

4. Imran Latif et al., “Empirical Measurements of AI Training Power Demand on a GPU-Accelerated Node,” arXiv preprint, December 2024.


About the Author:

Michael Martin is the Vice President of Technology with Metercor Inc., a Smart Meter, IoT, and Smart City systems integrator based in Canada. He has more than 40 years of experience in systems design for applications that use broadband networks, optical fibre, wireless, and digital communications technologies. He is a business and technology consultant. He was a senior executive consultant for 15 years with IBM, where he worked in the GBS Global Center of Competency for Energy and Utilities and the GTS Global Center of Excellence for Energy and Utilities. He is a founding partner and President of MICAN Communications and before that was President of Comlink Systems Limited and Ensat Broadcast Services, Inc., both divisions of Cygnal Technologies Corporation (CYN: TSX).

Martin served on the Board of Directors for TeraGo Inc (TGO: TSX) and on the Board of Directors for Avante Logixx Inc. (XX: TSX.V).  He has served as a Member, SCC ISO-IEC JTC 1/SC-41 – Internet of Things and related technologies, ISO – International Organization for Standardization, and as a member of the NIST SP 500-325 Fog Computing Conceptual Model, National Institute of Standards and Technology. He served on the Board of Governors of the University of Ontario Institute of Technology (UOIT) [now Ontario Tech University] and on the Board of Advisers of five different Colleges in Ontario – Centennial College, Humber College, George Brown College, Durham College, and Ryerson Polytechnic University [now Toronto Metropolitan University].  For 16 years he served on the Board of the Society of Motion Picture and Television Engineers (SMPTE), Toronto Section.

He holds three master’s degrees, in business (MBA), communication (MA), and education (MEd). As well, he has three undergraduate diplomas and seven certifications in business, computer programming, internetworking, project management, media, photography, and communication technology. He has completed over 60 next-generation MOOCs (Massive Open Online Courses) for continuing education in a wide variety of topics, including: Economics, Python Programming, Internet of Things, Cloud, Artificial Intelligence and Cognitive Systems, Blockchain, Agile, Big Data, Design Thinking, Security, Indigenous Canada awareness, and more.