
AI, Power and the New Digital Infrastructure Map

Or: Why your AI strategy is meaningless without a power strategy


We talk a lot about AI models, copilots and "agents".

Nice. But all of this lives in buildings that eat megawatts, drink water, and sit on top of cables, fibre and very real land.

Over the next few years, global data centre power demand is expected to grow 2-3x, driven mainly by AI workloads. The International Energy Agency (IEA) estimates data centres could consume 1,000 TWh annually by 2026—roughly equivalent to Japan's total electricity consumption.

At the same time, grids are under pressure, regulators are nervous, and everyone suddenly pretends to be an energy expert.

So the real question is simple:

Are you building an AI strategy—or quietly funding someone else's infrastructure strategy?

If you don't understand the plumbing of AI—power, cables, chips, data centres—you're not in the game. You're just a sophisticated customer with a nice slide deck.



1. The Moving Parts: What "AI Infrastructure" Really Is


When someone says "AI infra", half the room thinks GPUs, the other half thinks "more cloud". Both are incomplete.

The stack has four big pieces that most AI strategists ignore:


1.1 Data Centres – Where AI Actually Lives

AI doesn't live in "the cloud". It lives in buildings that hum, heat up and need serious cooling. These sites are very physical, very expensive and very dependent on where you put them.


Terrestrial Data Centres

Three main types:


Hyperscale — The big GPU barns from the usual suspects (AWS, Google, Microsoft, Meta). Purpose-built for massive AI training runs. Goldman Sachs estimates hyperscalers will invest $1 trillion in data centre infrastructure by 2030.


Colocation — Neutral "hotels" where many customers rent racks. Companies like Equinix, Digital Realty, and NTT operate these. Growing rapidly as enterprises want AI capacity without building their own facilities.


Edge — Smaller sites close to users, factories, cities. Critical for low-latency AI inference. Think autonomous vehicles, industrial automation, real-time analytics.


Two Very Different Workloads:

  • Training — Long, heavy, power-hungry. A single large language model training run can consume 1-5 MW continuously for weeks or months. GPT-3's training is estimated to have used around 1,287 MWh of electricity.

  • Inference — Shorter, more distributed, latency-sensitive. This is where the trained model actually serves users. ChatGPT handles millions of inference requests daily, each requiring much less power than training—but adding up fast.
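The contrast becomes concrete with some back-of-envelope arithmetic. A minimal sketch: the 1-5 MW continuous draw for training is from the figures above; the per-request inference energy and the request volume are illustrative assumptions, not measured values.

```python
# Back-of-envelope: energy for a sustained training run vs. daily inference.
# Illustrative numbers only -- real figures vary widely by model and hardware.

def training_energy_mwh(avg_power_mw: float, weeks: float) -> float:
    """Energy for a continuous training run, in MWh (MW x hours = MWh)."""
    hours = weeks * 7 * 24
    return avg_power_mw * hours

# A hypothetical 2 MW cluster running flat-out for 4 weeks:
run_mwh = training_energy_mwh(2.0, 4)
print(f"Training run: {run_mwh:,.0f} MWh")  # 1,344 MWh -- same order as the GPT-3 estimate

# Inference: tiny per request, huge in aggregate.
# Assume ~1 Wh per request (hypothetical) and 10 million requests per day:
daily_inference_mwh = 1.0 * 10_000_000 / 1_000_000  # Wh -> MWh
print(f"Daily inference: {daily_inference_mwh:.0f} MWh/day")
```

Even with generous assumptions, one big training run dwarfs a day of inference; but inference runs every day, forever, which is why both sides of the ledger matter.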


Orbital / Space-Based Data Centres

Still early, but no longer science fiction:

  • China's "Space Cloud" Vision — China Aerospace Science and Technology Corporation announced plans for solar-powered data centres in orbit by 2030. The concept: continuous solar power, no atmospheric cooling issues, and potentially strategic advantages.

  • Why it matters — Solar-powered compute in orbit, above the atmosphere. Likely for strategic, sensitive or ultra-high-availability workloads. Think special forces of compute, not the main army (yet).

Every AI promise ends up in a rack somewhere. If you don't know where, you don't know your risk.


1.2 Connectivity – The Nervous System

You can build the best data centre on earth (or above it), but if bits can't move fast and reliably, it's just an expensive heater.

Connectivity is the nervous system: it decides latency, resilience, and dependency.


Subsea Cables

  • Carry 99% of international data traffic (not satellites, despite what Elon wants you to believe)

  • Define the "gravity points" for landing stations and interconnect hubs

  • Increasingly financed and controlled by hyperscalers, not telcos — Google, Meta, Microsoft and Amazon now own or co-own over 30 subsea cables globally

  • The SEA-ME-WE 6 cable (Southeast Asia-Middle East-Western Europe) will be 19,200 km long and ready by 2025, connecting 10 countries


Fibre Backbone

  • High-capacity routes inside and between countries

  • Long-life assets (20-30 years if done right)

  • The enabler for low-latency AI between cities and regions

  • Dark fibre is increasingly valuable—many telcos are sitting on underutilized assets that could be goldmines


5G / 6G / Fixed Access

  • How humans, machines and sensors actually touch your AI

  • Drives where you need edge computing and local inference

  • Coverage, quality and spectrum choices shape who can use what

  • 6G is expected around 2030 with 100x faster speeds than 5G—critical for real-time AI applications

You can't talk "AI sovereignty" if your models live at the end of someone else's cable.


1.3 Compute & Chips – Brains, Heat and Tokens

Everyone loves to talk GPUs and benchmarks. Fine. But at scale, chips are mostly two things: brains and heaters.

The real game is not the press release about your latest accelerator. It's how many useful tokens you can push per kilowatt-hour—and whether anyone in your company can actually use that capacity.


Types of Compute

  • GPUs (NVIDIA dominates with 80%+ market share in AI training)

  • TPUs (Google's Tensor Processing Units, optimized for TensorFlow)

  • Custom accelerators (AWS Inferentia/Trainium, Microsoft's Maia)

  • ASICs (Application-Specific Integrated Circuits for specialized tasks)

  • Different mixes for training vs inference


Efficiency vs Demand

  • Each chip generation improves performance per watt by 2-3x

  • But total AI demand is growing 10x+ in the same period

  • NVIDIA's H100 GPUs draw 700W each; a rack of 8 can pull 5.6 kW just for the chips, plus cooling

  • Without a power strategy, "more GPUs" just means "more heat"
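The rack arithmetic above extends one step further: what the facility actually draws is the IT load multiplied by PUE (Power Usage Effectiveness), which folds in cooling and power-distribution overhead. The 700 W per GPU is from the text; the PUE of 1.3 is an assumed, fairly typical value for a modern facility.

```python
# Sketch: facility power for one GPU rack, including overhead via PUE.
# 700 W per H100 is from the text; PUE = 1.3 is an assumption.

def rack_power_kw(gpus: int, watts_per_gpu: float, pue: float = 1.3) -> float:
    """Total facility draw for one rack: IT load scaled by PUE."""
    it_load_kw = gpus * watts_per_gpu / 1000
    return it_load_kw * pue

# 8 x 700 W = 5.6 kW of chips; at PUE 1.3 the facility draws ~7.3 kW
print(f"{rack_power_kw(8, 700):.2f} kW")
```

Multiply that by hundreds of racks and the substation question stops being abstract.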


The Practical Questions

  • How much useful work do we get per kWh?

  • Are we locked into one vendor / one cloud / one architecture?

  • Is this capex a tool—or just a status symbol?
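The first question, useful work per kWh, can be turned into an actual metric rather than a slogan. A minimal sketch, with hypothetical throughput and power numbers:

```python
# Sketch of the "useful tokens per kWh" metric mentioned above.
# Throughput and power figures are hypothetical placeholders.

def tokens_per_kwh(tokens_per_second: float, power_kw: float) -> float:
    """Useful output per unit of energy: tokens served per kWh consumed."""
    tokens_per_hour = tokens_per_second * 3600
    return tokens_per_hour / power_kw

# Hypothetical inference server: 2,000 tokens/s at 10 kW facility draw
print(f"{tokens_per_kwh(2000, 10):,.0f} tokens/kWh")  # 720,000
```

Track this number per site and per vendor, and the "is this capex a tool or a status symbol" debate answers itself.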


Reality check: Most companies can't even get the GPUs they want. NVIDIA's H100s have had 6-12 month wait times. If you're not a hyperscaler or haven't locked in supply, you're in the slow lane.


1.4 Energy & Grid – The Real Boss in the Room

This is the part most AI strategies skip in the glossy deck.

Your grand AI plan is meaningless if:

  • The substation can't support another megawatt, or

  • Your "green story" is built on gas and hope

Energy and the grid decide where you can build, how fast you can grow, and whether your AI story is credible—or just marketing.

Power Sources


  • Grid mix — Coal, gas, renewables, nuclear. In 2024, US data centres still got ~60% of power from fossil fuels despite corporate green pledges.

  • PPAs (Power Purchase Agreements) — Long-term contracts for renewable energy. Meta, Google, and Amazon are among the world's largest corporate buyers of renewable energy.

  • On-site generation — Solar, batteries, and increasingly SMRs (Small Modular Reactors). Microsoft signed a deal to restart Three Mile Island's reactor for AI workloads.

  • Cost, stability and carbon footprint all matter
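A power portfolio mixing those sources has a blended carbon intensity you can actually compute, which is what a credible "green story" reduces to. A sketch with made-up shares and intensities:

```python
# Sketch: blended carbon intensity of a DC power portfolio
# (grid + renewable PPA + on-site generation). All numbers illustrative.

portfolio = [
    # (source, share of annual MWh, gCO2 per kWh)
    ("grid mix",      0.50, 380),
    ("renewable PPA", 0.40, 0),
    ("on-site solar", 0.10, 0),
]

blended = sum(share * intensity for _, share, intensity in portfolio)
print(f"Blended intensity: {blended:.0f} gCO2/kWh")  # 190 with these shares
```

Half your megawatt-hours on a fossil-heavy grid means your blended number is half the grid's, no matter what the press release says.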


Grid Constraints

  • Substation capacity and connection lead times — In parts of Virginia (US data centre hub), new connections can take 3-5 years

  • Transmission bottlenecks between regions

  • Local politics — Water (data centres can use millions of gallons daily for cooling), land, noise, permits

  • Ireland has considered moratoriums on new data centres due to grid strain


Circularity & Reuse

  • Feeding waste heat into district heating or industry (common in Nordics)

  • Using AI loads as flexible demand to support grid stability

  • Moving workloads in time and space to follow cheap, clean power (Google does this)
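That last point, moving workloads to follow cheap, clean power, is essentially a scheduling problem. A toy sketch of carbon-aware placement; the region names, intensities and prices are invented for illustration, not real market data:

```python
# Sketch: carbon-aware scheduling -- place flexible AI jobs in the region
# with the best blend of carbon intensity and power price.
# All numbers below are made up for illustration.

regions = {
    "nordics":   {"carbon_g_per_kwh": 30,  "price_eur_per_mwh": 45},
    "frankfurt": {"carbon_g_per_kwh": 350, "price_eur_per_mwh": 90},
    "virginia":  {"carbon_g_per_kwh": 400, "price_eur_per_mwh": 70},
}

def pick_region(regions: dict, carbon_weight: float = 0.5) -> str:
    """Rank regions by a normalized blend of carbon intensity and price."""
    max_c = max(r["carbon_g_per_kwh"] for r in regions.values())
    max_p = max(r["price_eur_per_mwh"] for r in regions.values())

    def score(r):
        return (carbon_weight * r["carbon_g_per_kwh"] / max_c
                + (1 - carbon_weight) * r["price_eur_per_mwh"] / max_p)

    return min(regions, key=lambda name: score(regions[name]))

print(pick_region(regions))  # "nordics" with these illustrative numbers
```

Real schedulers pull live grid data and respect latency and data-residency constraints, but the logic is the same: flexible compute chases clean electrons.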

If the grid says "no", your AI roadmap becomes a wish list.


2. The Trends: Where the Curve Is Really Going

Now zoom out.


The Numbers:

  • AI workloads are exploding — Training GPT-4 is estimated to have cost OpenAI over $100 million, largely in compute

  • Data centre energy use — Expected to reach 3-4% of global electricity demand by 2030 (IEA), up from ~1% in 2020

  • Chips are getting more efficient, but demand is growing faster — NVIDIA's revenue grew 265% year-over-year in 2023, driven almost entirely by AI chips

  • Regulators are watching — EU's Energy Efficiency Directive now includes data centres; Singapore paused new data centre approvals in 2019-2022 due to power constraints


The Shift: From Experiment to Critical Infrastructure

Banks, health systems, factories, logistics, national security—everyone is wiring AI into their core systems.

Which means when the DC dies, it's not "just IT". It's operations.

The Money Is Moving:

  • Hyperscalers are now some of the biggest energy buyers in the world

  • Telco capex is increasingly overshadowed — Global telco capex is ~$400B annually; hyperscaler capex hit ~$200B in 2023 and is growing faster

  • Countries are talking AI sovereignty the way they once talked about oil and nuclear

So this is not "just tech". This is geopolitics, energy policy, and industrial policy—all in one.


3. Regional Playbooks: No Copy-Paste Strategy

Different regions have different roles to play. Trying to copy-paste someone else's model is the fastest way to waste capex.


3.1 Nordics – Green Baseload for AI

The Nordics accidentally became the "Switzerland" of data centres:

Why they win:

  • Abundant renewables — Norway gets 98% of electricity from hydro; Sweden ~75% from hydro and nuclear

  • Cold climate = free cooling — Average temperatures in Stockholm: 2-18°C year-round

  • Strong district heating systems that can reuse DC waste heat

  • Political stability and strong rule of law

Perfect for:

  • Long-running AI training jobs

  • High-density GPU clusters that need cooling and a good carbon story

  • "Baseload AI" that investors and regulators feel good about

Who's there: Meta (Luleå, Sweden), Google (Hamina, Finland), Microsoft (multiple Nordic sites)

If I were a hyperscaler, this is where I park my heavy, boring, always-on workloads.


3.2 South Med & North Africa – Cable Edge and Interconnect

Look at the map around Egypt, Morocco, the Red Sea, the Mediterranean:


Why it matters:

  • Subsea cable chokepoint — 17+ major cables pass through or land in this region, connecting Europe, Asia, and Africa

  • New AI & cloud data centres announced around landing stations (Egypt, Morocco, Saudi Arabia)

  • Solar and wind potential is massive — Morocco has some of the world's best solar resources

  • Growing population and digital economy — Africa's internet users growing 10%+ annually


Perfect for:

  • Regional AI inference (not just training) close to users

  • Sovereign / regional clouds for Africa & MENA

  • Low-latency interconnect between three continents

If the Nordics are the "AI baseload plant", North Africa and the South Med can be the AI edge and interconnect hub.


The open question: Do you become owners of that role—or just landlords to foreign clouds?


3.3 US & Western Europe – Demand Centres and Political Battlegrounds

Here sits most of the demand, the money and the noise:


The Reality:

  • Huge AI consumption from enterprises, startups, governments

  • Local resistance around new DCs — Virginia residents fighting noise and water use; Ireland limiting new builds

  • Political battles over chips (CHIPS Act in US), export controls (US-China tensions), and "who owns the models"

The playbook:

  • Squeezing more value out of existing infrastructure

  • Locating DCs where power + permits + fibre actually align

  • Balancing "we want AI" with "not in my backyard"

Examples:

  • Northern Virginia — Largest data centre market globally, but grid strain pushing new builds to surrounding states

  • Frankfurt — Europe's data hub, but energy costs and regulations creating challenges

  • Paris — Pushing for "sovereign cloud" to reduce dependence on US hyperscalers


3.4 Asia Hubs – High-Value, High-Constraint Nodes

Places like Singapore, Hong Kong, UAE:

Characteristics:

  • Great connectivity — Singapore has 30+ submarine cables landing

  • Strong financial hubs and strategic locations

  • Very limited land and tight energy/water constraints — Singapore paused new DC approvals 2019-2022

  • Heavy regulation and strong government steering

These become premium interconnect and control points rather than bulk capacity locations.

Think: High-value, low-latency financial trading, regional AI inference hubs, not massive training farms.


4. China's Play: Nukes, Space and AI Sovereignty

While the West argues on panels, China is playing a very different game:


The Facts:


  • Building new nuclear reactors — China has 55 operational nuclear reactors (3rd globally) and 23 under construction (most in the world). New reactors partly justified by AI and data centre demand.

  • Space-based data centres — China Aerospace Science and Technology Corporation announced plans for solar-powered orbital data centres by 2030

  • Treating AI + Energy + Infra as one policy — Not three separate ministries writing white papers


You don't have to like it. But you should respect the coherence:

They're answering the question "where will our AI live and who powers it?" with actual steel and concrete.


Meanwhile, many others are still answering that question with… white papers.


5. What the Big Players Are Doing (Instead of Just Talking)

The big cloud and infra players are not waiting for permission:


Energy First:

  • Google — Committed to running on 24/7 carbon-free energy by 2030; already largest corporate buyer of renewable PPAs

  • Microsoft — Restarting Three Mile Island reactor (880 MW) for 20 years to power AI; investing in fusion research

  • Amazon — Over 20 GW of renewable energy capacity contracted


Cables Second:

  • Meta — Co-owns or owns stakes in 20+ subsea cables

  • Google — Owns or co-owns 30+ subsea cables globally

  • Microsoft & Amazon — Heavy investments in subsea infrastructure


Data Centres Third:

  • Re-designing for high-density AI racks — Going from 5-10 kW per rack to 30-50+ kW per rack

  • Heat reuse — Meta's Odense DC (Denmark) feeds its waste heat into the local district heating network, warming thousands of homes

  • Picking very specific regions for very specific roles


The Pattern Is Clear:

  1. Energy first

  2. Cables second

  3. Data centres third

  4. Models and features on top

Everyone loves to talk about the top layer. The smart money is quietly locking in the bottom three.


6. So What? What This Means for Leaders

Let's bring it back to earth.


6.1 If You're a Telco

You are either:

  • Part of the AI plumbing, or

  • Watching others build it on top of your ducts and fibres


Simple Rules:

Anchor capex in "no-regret" assets:

  • Deep fibre (especially dark fibre that can be lit up later)

  • Spectrum (especially mid-band 5G)

  • Key edge locations (near industrial clusters, cities, universities)

Don't build speculative GPU bunkers "because AI". Tie each phase to:

  • Real power deals (locked in for 5-10+ years), and

  • Real anchor tenants or workloads (not "if we build it, they will come")

Separate infra from "AI product experiments" in your P&L. Otherwise the board will kill the wrong thing when budgets tighten.

The opportunity: Your existing fibre, towers, and real estate could be worth more as AI infrastructure than as traditional telco assets. But only if you act now.


6.2 If You're an Infra / Data Centre Provider


Decide your archetype:

  • Green baseload? (Think Nordics)

  • Cable-edge interconnect? (Think North Africa, South Med)

  • Metro edge? (Think last-mile AI inference)

You can't be all three.


Then:

Design for heat reuse where it actually makes sense

  • District heating (works in cold climates with existing infrastructure)

  • Industrial process heat (works near manufacturing)

  • Don't force it where the economics don't work
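Whether heat reuse pencils out is, again, arithmetic you can do upfront. A minimal sketch: nearly all IT power ends up as heat, but the recovery fraction and the per-household heat demand below are assumptions, not universal constants.

```python
# Sketch: how much waste heat a data centre could feed into district heating.
# Recovery fraction and household heat demand are assumptions.

def reusable_heat_mwh_per_year(it_load_mw: float,
                               recovery_fraction: float = 0.7) -> float:
    """Almost all IT power becomes heat; only part is practically recoverable."""
    return it_load_mw * 8760 * recovery_fraction  # MW x hours/year x fraction

# A hypothetical 20 MW facility, assuming ~70% heat recovery:
heat = reusable_heat_mwh_per_year(20)
households = heat / 15  # assume ~15 MWh heat demand per household per year
print(f"{heat:,.0f} MWh/yr ~= heat for {households:,.0f} households")
```

If the answer is "a few thousand households and there's a heat network next door", build it. If it's "a cooling tower in a desert", don't force it.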

Lock in long-term power early — Not after the building is up. Get PPAs, grid connections, and backup generators sorted before breaking ground.


Be honest about what workloads you're optimized for:

  • Training (needs massive power, less latency-sensitive)

  • Inference (needs lower latency, more distributed)

  • General IT (traditional enterprise workloads)


Reality check: If you can't clearly answer "What's our power strategy for the next 10 years?", you don't have a data centre strategy.


6.3 If You're an Investor / Family Office / PE


Stop only chasing shiny AI apps.


Ask three questions in every AI infra pitch:

1. Where is the power coming from for the next 10-20 years?

  • What's the grid mix?

  • Are there locked-in PPAs?

  • What are the backup plans when demand spikes?

2. What cables and connectivity does this location really have?

  • How many subsea cables land nearby?

  • What's the fibre backbone capacity?

  • Who owns the cables? (If it's all foreign-owned, that's a dependency risk)

3. What role in the value chain does this asset play?

  • Baseload (training farms in Nordics)?

  • Edge (inference close to users)?

  • Interconnect (cable landing stations)?

  • Or nothing special? (Just another generic DC)


If the team can't answer clearly, it's not an AI infra strategy. It's a hope trade with GPUs.


Also: don't ignore "unfashionable" regions.

Cable hubs in Africa, the South Med, or secondary European cities might be the real asymmetric bets—if power and regulation line up.


Example: Morocco and Egypt have excellent solar, are building out fibre, and sit at the junction of three continents. But most investors are still only looking at Frankfurt and Virginia.


7. Closing: Choose Your Role, or It Will Be Chosen for You


In the AI era, your real strategy is not which model you fine-tune.

Your real strategy is:

  • Where you put your megawatts

  • Which cables you hang off

  • Which regions you bet on for training and which for inference


Everything else is a feature.

If you don't choose your role in this new map, you will wake up one day as just another tenant in someone else's AI empire—paying rent, following rules, and calling it "innovation".


Three Final Thoughts:


1. The window is shorter than you think

Grid connections take 3-5 years. Subsea cables take 2-3 years to lay. PPAs take months to negotiate. If you're starting this conversation today, you're already late for 2027.


2. Geography is back

For 20 years, we told ourselves "cloud means location doesn't matter." That was always partly a lie, but now it's obviously wrong. Where your compute sits determines your power costs, your latency, your regulatory exposure, and your strategic dependencies.


3. Infrastructure is the new moat

In the last era, data was the moat. In this era, infrastructure is the moat. The companies and countries that own the full stack—power, cables, data centres, chips—will write the rules. Everyone else will play by them.


Your move.

This is part of my ongoing series on technology, power, and what it takes to build a future that works. If this resonated, check out my other posts on AI Sovereignty, Why Corporates Are Buying AI All Wrong, and Are You Learning to Farm?

© 2025 by Amir Abdelazim.