Executive Summary
The next wave of AI infrastructure won't all live in mega-scale hyperscaler campuses. Edge computing — distributing AI inference and processing to geographically dispersed, 1–20 MW facilities — is becoming strategically essential for enterprises seeking sub-10ms latency, data sovereignty, and operational efficiency.
This guide analyzes whether enterprises will self-build their own edge data centers or rent from established providers. The answer: a hybrid landscape is emerging. Large tech firms and Fortune 500 companies with stable, predictable AI workloads are exploring corporate-owned facilities; most others are subscribing to edge infrastructure from hyperscalers (AWS Local Zones, Google Distributed Cloud), specialized edge providers (Cloudflare, Akamai), and telecom operators (Verizon MEC, T-Mobile Edge).
For powered-land investors, this creates dual opportunities: (1) partnership or acquisition plays with edge-focused operators; (2) site identification and build-to-suit arrangements with enterprises planning corporate self-build programs.
Market Landscape & Recent Trends
AI is accelerating a broader shift from centralized computing toward distributed architectures. Gartner forecasts that through 2027, 50% of critical enterprise applications will reside outside centralized public cloud locations — a direct signal that hybrid/edge footprints are becoming structural, not experimental.
On the supply side, AI-driven demand is colliding with development constraints. CBRE reports a record 6,350 MW under construction in primary North American markets at end of 2024, with project timelines extended by power constraints and supply chain delays. JLL reports extremely tight vacancy (approximately 1% in its year-end North America view), consistent with undersupplied capacity and strong pricing power for "ready" sites.
Energy and grid implications are now central to strategy. The IEA estimates data centers used approximately 415 TWh globally in 2024 (~1.5% of world electricity) and projects consumption to reach approximately 945 TWh by 2030. For the U.S., estimates peg 2024 consumption at 183 TWh, projected to reach 426 TWh by 2030. The infrastructure substrate is tightening: minimal vacancy, aggressive construction pipelines, and energy constraints are driving demand for geographically distributed, modular edge facilities.
Deployment Models
The market is converging on three deployment models: corporate self-build, provider-built colocation, and rent-to-consume edge services from hyperscalers and telcos. The key investor question is which models create durable demand for third-party powered land; each model has distinct real estate and capital implications.
Key insight: The "build vs. rent" decision is primarily driven by workload stability, scale, and data sovereignty requirements — not geography.
Drivers & Barriers for Corporate Self-Build
Edge compute exists to deliver low latency and real-time performance. Retail and industrial operations benefit from real-time inference (computer vision, robotics, safety systems) where cloud roundtrips are operationally expensive. Public cloud providers are productizing sovereignty needs directly into edge-like offerings — AWS introduced Dedicated Local Zones (2023) as managed infrastructure placed at customer-specified locations. But several barriers keep most corporations renting rather than building.
+ Drivers Pushing Self-Build
AI inference for autonomous vehicles, robotics, and real-time processing requires <10ms round-trip. Centralized processing can't compete.
EU, China, and increasingly US regulation requires data residency. Edge keeps sensitive data processing local.
Egress bandwidth costs from hyperscalers ($0.15–$0.35/GB) make local processing economical for high-throughput workloads.
Proprietary models and algorithms stay on-premise; reduces attack surface and IP leakage risk.
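The bandwidth-cost driver above lends itself to a simple break-even check. A minimal sketch, assuming an illustrative 2 PB/month egress workload and straight-line 10-year amortization; the $0.15/GB rate (low end of the cited range) and ~$13M/MW for 1–5 MW builds come from this guide's benchmarks:

```python
# Rough break-even sketch for the egress-cost driver. The 2 PB/month
# volume and 10-year amortization window are illustrative assumptions;
# the $0.15/GB egress rate and $13M/MW build cost are the figures cited
# in this guide.
egress_gb = 2_000_000                   # 2 PB/month of assumed data egress
egress_monthly = egress_gb * 0.15       # hyperscaler egress fees, low end of range
build_monthly = 13_000_000 / (10 * 12)  # 1 MW edge build, straight-line over 10 yrs

print(f"egress ${egress_monthly:,.0f}/mo vs. build ${build_monthly:,.0f}/mo")
```

At these assumptions, monthly egress fees alone (~$300k) exceed the amortized cost of a small edge build (~$108k), which is why high-throughput workloads tip toward local processing.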
− Barriers That Keep Most Renting
Capex of $9.3M–$15M per MW is feasible only for companies with consistent, multi-year workload forecasts.
Local grid capacity, utility interconnection delays (6–18 months), and zoning restrictions remain critical bottlenecks.
Switchgear often has ~46–48 week lead times, generators/chillers 30+ weeks. GPU/AI accelerator shortages add execution risk.
AI accelerator racks draw far more power than the ~10 kW typical of traditional enterprise racks, requiring advanced liquid cooling. Only ~22% of operators report any direct liquid cooling usage (2024).
Case Studies (2023–2026)
Recent moves by hyperscalers, CDNs, and telcos show that the infrastructure for edge AI is being built aggressively — but overwhelmingly by providers, not enterprises. Walmart is the notable exception among corporations, demonstrating the self-build path for companies with massive distributed footprints.
AWS Local Zones & Dedicated Local Zones
AWS deployed 30+ Local Zones in metro areas. Dedicated Local Zones (Aug 2023) are positioned for sovereignty/regulatory needs with customer-specified placement — reducing incentives to self-build.
Cloudflare Workers AI
Launched Sep 2023 with GPU rollout to 100+ sites. Global edge network of small-footprint inference nodes, each 500 kW–2 MW. Fastest adoption among SMB/mid-market developers.
Akamai Inference Cloud
Oct 2025 launch; Mar 2026 announcement to deploy thousands of NVIDIA Blackwell GPUs. 4,000+ global PoPs repurposed for managed inference-as-a-service.
Verizon MEC + AWS Fiber
Verizon + NVIDIA (Dec 2024) positions private MEC for real-time AI. Verizon + AWS AI Connect fiber routes (Nov 2025). Telecom + cloud convergence provides edge without enterprises owning facilities.
Walmart Retail Edge
Walmart deploys compute closer to stores/DCs for latency and performance. Its Element ML platform emphasizes reduced dependency on vendors and cost savings. Large enterprises with distributed footprints will run micro-edge fleets inside existing facilities.
Site Specifications & Cost Benchmarks
U.S. construction cost per MW varies widely by facility size and market. Cushman & Wakefield reports critical load development costs ranging from about $9.3M to $15M per MW across 19 U.S. markets (average ~$11.7M/MW). Smaller builds are more expensive per MW: ~$13M/MW for 1–5 MW facilities vs. ~$11.7M/MW for 5–20 MW.
Construction costs have risen steadily — from ~$7.7M/MW in 2020 to ~$10.7M/MW in 2025, with JLL forecasting ~$11.3M/MW in 2026. Data center land averages $5.59/sf (~$244k/acre). For a 10 MW IT load, annual energy costs run roughly $8.2M–$16.4M depending on local utility rates.
Source: Cushman & Wakefield Data Center Development Cost Guide 2025. Costs include site, building, power, cooling, and IT hardware. Regional variations ±15%.
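The benchmarks above combine into a simple facility cost model. A minimal sketch using the cited ~$11.7M/MW average capex; the utility-rate endpoints ($0.094–$0.187/kWh) are assumptions chosen to reproduce the cited $8.2M–$16.4M annual energy range and will vary by market:

```python
# Back-of-envelope cost model for an edge facility, built on the
# Cushman & Wakefield / JLL figures cited above. The per-kWh rate
# endpoints are assumptions implied by the cited energy-cost range.
HOURS_PER_YEAR = 8760

def facility_costs(it_load_mw, capex_per_mw=11.7e6, rates_per_kwh=(0.094, 0.187)):
    """Return (capex, (annual energy cost low, high)) for a given IT load."""
    capex = it_load_mw * capex_per_mw
    annual_kwh = it_load_mw * 1000 * HOURS_PER_YEAR
    energy = tuple(annual_kwh * r for r in rates_per_kwh)
    return capex, energy

capex, (energy_lo, energy_hi) = facility_costs(10)
print(f"capex: ${capex / 1e6:.0f}M")
print(f"annual energy: ${energy_lo / 1e6:.1f}M–${energy_hi / 1e6:.1f}M")
```

For the 10 MW case this yields $117M of capex at the market average and the $8.2M–$16.4M annual energy range quoted above.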
Enterprise Decision Framework
The decision logic for enterprises follows a clear flow. The first question is whether the workload is latency-critical (sub-10ms). If not, central cloud regions work fine. If latency matters, the next filter is data sovereignty — and then whether the enterprise has stable, multi-year utilization that justifies the capex of self-build.
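The flow above can be sketched as a small decision function. The latency and sovereignty filters come directly from the text; the exact utilization cutoff that justifies self-build is left as a boolean input because it is enterprise-specific:

```python
# Minimal sketch of the enterprise decision flow described above.
# The branch order mirrors the text: latency first, then sovereignty,
# then workload stability. Return values are model labels, not products.
def deployment_model(latency_critical: bool,
                     sovereignty_required: bool,
                     stable_multi_year_utilization: bool) -> str:
    if not latency_critical:
        # No sub-10ms requirement: central cloud regions work fine.
        return "central cloud region"
    if sovereignty_required and stable_multi_year_utilization:
        # Stable, multi-year utilization justifies self-build capex.
        return "corporate self-build"
    if sovereignty_required:
        # Sovereignty without stable demand: managed sovereign edge.
        return "dedicated/sovereign provider edge"
    return "rent provider edge (Local Zones, CDN edge, telco MEC)"
```

For example, a retailer running stable computer-vision inference under data-residency rules lands on self-build, while a bursty latency-sensitive workload lands on rented provider edge.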
Adoption Scenarios (2026–2030)
The key distinction is between "edge AI happens" and "enterprises themselves build MW-scale edge data centers." Using IEA/Pew estimates, U.S. data center electricity consumption rises from 183 TWh (2024) to 426 TWh (2030), an increase of 243 TWh (~27.7 GW of average additional facility load). Even in the aggressive scenario, corporate self-build remains only 12–15% of total edge capacity; provider-built and rent-to-consume models dominate due to lower capex and faster deployment timelines.
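The TWh-to-GW conversion used above is just annual energy divided by hours in a year, giving average (not peak) load:

```python
# Converting the projected U.S. consumption increase (IEA/Pew figures
# cited above) into average additional facility load.
added_twh = 426 - 183               # 2030 projection minus 2024 estimate
avg_gw = added_twh * 1000 / 8760    # TWh/yr -> GWh/yr -> GW of average load
print(round(avg_gw, 1))             # ~27.7
```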
2024–2026 Milestone Timeline
The pace of edge AI infrastructure buildout has accelerated sharply. What was experimental in 2023 has become operational strategy by 2025, with first-mover corporate self-builds targeting 2026 delivery.
Implications for Powered-Land Investors
Power-first siting is now explicit in market research. Power availability is the binding constraint in established markets, pushing interest into emerging and tertiary markets. This directly favors investors who can assemble parcels with credible interconnection pathways, transmission/substation proximity, and permitting readiness. Demand for 1–20 MW sites is supported by provider-built edge expansion, hyperscaler metro footprints, and telco MEC evolution.
Land Acquisition Strategy
Target 5–20 acre parcels in Tier-2/Tier-3 metros with fiber-ready infrastructure, grid capacity for 5–20 MW, and 20–30 year hold economics. Corporate self-build will remain niche (10–15%), but deals are large, multi-year, and sticky.
Operator Partnerships
Partner with regional data center operators who can deploy 1–5 MW facilities on your land. Colocation operators will remain dominant (40–50% of edge market). Revenue upside: triple-net leases + revenue share on edge services.
Power & Connectivity
Secure long-term PPAs (15–25 yr) with utilities for 10–30 MW. Diversify fiber connectivity: metro carriers, regional CLEC, wireless backup. These infrastructure assets command premium triple-net rents.
Risk Factors
Tenant concentration risk with single-enterprise facilities. Power delivery risk is primary: constrained substations can strand land. Tech capex cycles are volatile. Mitigate via multi-tenant master leases and revenue escrow.
Risk Comparison: Self-Build vs. Provider-Built
Self-Build Risks
- Execution risk: 18–24 month buildout; permitting delays
- Capex concentration: $93M–$300M for multi-facility program
- Stranded assets if enterprise pivots to cloud
- Operational complexity: 24/7 staffing, compliance
- GPU/cooling equipment supply constraints
Provider-Built Upside
- Faster time-to-market: 12–15 months to revenue
- Lower capex per MW; distributed risk across portfolios
- Revenue diversification: multi-tenant + hyperscaler
- Operational leverage: centralized ops, proven SLAs
- Exit optionality: saleable to REIT, PE, strategic buyer
Key Takeaways
For Powered-Land Investors
- Edge data centers are a material real estate opportunity — 45–120 GW by 2030 across all models.
- Corporate self-build remains 10–15% of edge market, but deals are large ($93M–$300M), long-term, and capital-intensive.
- Provider-built facilities dominate; partner with colocation operators and hyperscalers for distributed edge networks.
- Secure fiber diversity and long-term PPAs — these become competitive moats commanding premium rents.
Recommended Next Steps
- Identify Tier-2/Tier-3 metros with fiber infrastructure and 20+ MW grid capacity.
- Map utility power availability; confirm interconnection feasibility and timeline.
- Engage colo operators and telcos; explore build-to-suit and revenue-share partnerships.
- Monitor Fortune 500 RFP activity; target AI/ML-driven enterprises planning self-build programs.
Sources: IEA Energy and AI Report (2024), CBRE North America Data Center Trends H2 2024, Cushman & Wakefield Data Center Development Cost Guide 2025, JLL 2026 Global Data Center Outlook, Gartner Edge Computing Forecast, U.S. Department of Energy.