⚠️ AI ROI Extension
Will the $1-3T AI Capex Wave Earn Its Cost of Capital?
Wall Street's Burning Question
Hyperscalers are investing $2.8 trillion in AI infrastructure through 2029. Goldman Sachs warns of "too much spend, too little benefit." The IMF and Bank of England raise bubble concerns comparable to the 2000 dot-com crash.
This framework provides the first LP-based system to measure AI capex ROI from public filings, turning trillion-dollar narratives into testable, auditable economics.
Key Metrics
- $2.8T projected AI infrastructure investment through 2029 (Citigroup)
- 55 GW of new power capacity required by 2030
- 10-15% typical GPU utilization (85%+ sits idle)
- 97% of enterprises struggle to demonstrate GenAI business value
- 3-5 years typical infrastructure payback (J-curve effect)
- $600B revenue gap between current AI revenue and infrastructure requirements (Sequoia Capital)
What This Framework Measures
Theory Documentation
Example: Microsoft FY2024 Analysis
| Metric | Value | Status |
|---|---|---|
| AI Invested Capital \((IC^{AI})\) | $111.0B (ending), $98.0B (average) | DISCLOSED |
| \(\Delta\) NOPAT (AI-attributed) | $5.3B – $8.9B | ESTIMATED |
| \(ROIC^{AI}\) | 5.4% – 9.1% | AMBIGUOUS |
| WACC | 8.5% | CALCULATED |
| Implied EV/EBITDA \((g=6.5\%)\) | 11.7× | CONSISTENT |
| Analyst Multiple | 15.0× | TOO HIGH (+3.3×) |
| PPA Capacity | 0.8 GW | CONSTRAINED (104× gap) |
| Implied Power Demand | 83.3 GW | INFEASIBLE |
Key Finding: \(ROIC^{AI}\) is ambiguous in FY2024: the 5.4-9.1% range straddles the 8.5% WACC, with the lower bound below the hurdle and the upper bound slightly above it. The J-curve effect suggests full returns by 2027-2028 if the distributed lags play out. Critical constraint: disclosed power capacity is 104× short of implied demand (0.8 GW disclosed vs. 83.3 GW required), so Microsoft must expand PPAs or cap growth forecasts.
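The hurdle test and the power feasibility check above reduce to a few lines of arithmetic. A minimal sketch using the FY2024 figures from the table; the function names are illustrative, not the framework's actual API:

```python
# Sketch of the ROIC-vs-hurdle test and the power feasibility ratio.
# Inputs are the Microsoft FY2024 values from the table above ($B / GW);
# function names are illustrative, not the framework's real API.

def roic_bounds(nopat_low_b, nopat_high_b, avg_ic_b):
    """AI ROIC range implied by the NOPAT attribution bounds."""
    return nopat_low_b / avg_ic_b, nopat_high_b / avg_ic_b

def hurdle_status(roic_lo, roic_hi, wacc):
    """Classify the ROIC range against the cost-of-capital hurdle."""
    if roic_lo >= wacc:
        return "VALUE-CREATING"
    if roic_hi <= wacc:
        return "VALUE-DESTROYING"
    return "AMBIGUOUS"  # bounds straddle WACC

lo, hi = roic_bounds(5.3, 8.9, 98.0)       # -> ~5.4%, ~9.1%
status = hurdle_status(lo, hi, 0.085)      # -> "AMBIGUOUS"

power_gap = 83.3 / 0.8                     # implied GW / disclosed PPA GW
print(f"ROIC^AI: {lo:.1%}-{hi:.1%} ({status}); power gap {power_gap:.0f}x")
```

The same three-way classification generalizes to any company: only when both NOPAT attribution bounds land on the same side of WACC can the framework call the capex value-creating or value-destroying.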
Key Innovations
vs. Traditional DCF
- Conservation-Consistent Terminal Multiples: EV/EBITDA must respect \(WACC > g\) and \(g \leq ROIC\) (physics constraints)
- Distributed Lag Model: Explicit 3-5 year payback timeline vs. assuming immediate returns
- Power Feasibility Check: Growth forecasts capped by disclosed PPA capacity (55 GW gap by 2030)
- Utilization Adjustment: Effective ROIC at 15% utilization is 6.7× lower than nominal
- Quality Haircut: \(Q_t\) penalizes aggressive AI claims from companies with sloppy accounting
- Multi-Method NOPAT Attribution: Report bounds across trend break, bottom-up, and cost savings approaches
Wall Street Debate
Bears (IMF, BoE, Goldman Sachs)
- Valuations "comparable to 2000 dot-com peak" (BoE Financial Stability Report, October 2024)
- "$1T spend with little to show for it" (Goldman Sachs, June 2024)
- Daron Acemoglu (MIT): Only 0.5% productivity gain over 10 years; 4.6% of tasks exposed to AI
- Sequoia Capital: $600B revenue gap (current AI revenue ~$100B vs. $600B needed to justify infrastructure)
- Circular financing concerns (Nvidia invests in OpenAI → OpenAI buys Nvidia GPUs)
Bulls (Morgan Stanley, Tech Companies)
- Profitability starts 2025 (34% margin projected, vs. dot-com unprofitability)
- Azure AI $13B run rate (+175% YoY), AWS fastest growth since 2022, Meta 22% ad efficiency gains
- Natural 5-10 year time lag between infrastructure and application revenue (distributed lag defense)
- Unlike dot-com: Companies are profitable and cash-generative ($100B+ free cash flow annually)
- Joseph Briggs (Goldman): 15% labor productivity increase possible if adoption scales
Framework Assessment
- Early-stage \(ROIC^{AI}\) (7-10%) is sub-hurdle for many but J-curve effect justifies patience
- Power constraints are binding (3-30× gaps between disclosed PPAs and implied demand)
- Utilization mystery is critical (industry 10-15% vs. hyperscaler "capacity constrained" claims)
- Valuations moderately stretched (28× P/E vs. 15-17× historical) but not extreme (dot-com was 200×)
- Systemic risk material due to concentration (Mag-7 is 30% of S&P 500) but fundamentals stronger than 2000
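The utilization point has simple arithmetic behind it. A sketch, assuming returns scale linearly with utilization (an illustrative simplification that ignores fixed costs and pricing effects):

```python
def effective_roic(nominal_roic, utilization):
    """Scale nominal ROIC by realized GPU utilization.
    Assumption: returns are linear in utilization (illustrative only;
    fixed costs would make the true relationship worse than linear)."""
    return nominal_roic * utilization

nominal = 0.09   # ~9% nominal AI ROIC (upper end of the example range)
util = 0.15      # top of the 10-15% typical industry utilization range

print(f"effective ROIC: {effective_roic(nominal, util):.2%}")
print(f"haircut factor: {1 / util:.1f}x")   # ~6.7x lower than nominal
```

This is why the utilization mystery matters: if hyperscalers really are capacity constrained (utilization near 100%), the haircut disappears; at industry-benchmark utilization, even an above-hurdle nominal ROIC collapses to low single digits.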
Python Implementation
Module Overview
- ai_ledger.py: Build \(IC^{AI}\) from PP&E roll-forwards and footnote disclosures
- incremental_roic.py: Calculate \(ROIC^{AI}\) with three attribution methods, report bounds
- lag_model.py: ARDL distributed lag model (3-5 year infrastructure payback)
- terminal_multiple.py: Validate conservation-consistent EV/EBITDA multiples
- power_analyzer.py: Check power constraints, estimate GPU utilization, extract PPAs
- quality_adjust.py: Apply \(Q_t\) haircuts to AI cash flows for noisy reporters
- cost_parity.py: Test Nvidia's "90% cost savings" claims with shadow P&L
- bubble_risk.py: Systemic risk scoring (concentration, valuation, circular financing)
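The distributed-lag idea behind lag_model.py can be illustrated with a minimal finite-lag sketch. The lag weights below are hypothetical, and the real module is specified as an ARDL estimator rather than fixed weights:

```python
def lagged_nopat(capex_by_year, lag_weights):
    """Spread each year's capex into future NOPAT contributions.
    lag_weights[k] = cash yield arriving k years after the spend
    (hypothetical 5-year J-curve ramping to a ~10% steady-state yield)."""
    horizon = len(capex_by_year) + len(lag_weights) - 1
    nopat = [0.0] * horizon
    for t, capex in enumerate(capex_by_year):
        for k, w in enumerate(lag_weights):
            nopat[t + k] += capex * w
    return nopat

weights = [0.00, 0.02, 0.05, 0.08, 0.10]   # nothing in the spend year
profile = lagged_nopat([50.0, 50.0, 50.0], weights)  # $50B/yr for 3 years
print([round(x, 1) for x in profile])
```

Even under a generous steady-state yield, the first years show near-zero returns: exactly the J-curve pattern that makes early-stage \(ROIC^{AI}\) look sub-hurdle while the lags are still playing out.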
Status: Python implementation delegated to Codex via CODEX_TASKS/TASK_12_AI_ROI_IMPLEMENTATION.md. Target: 8 modules (~1,200 LOC), 85%+ test coverage, oracle fixtures for MSFT/META/GOOGL/AMZN.
Data Sources
| Data Type | Source | Quality |
|---|---|---|
| Total CapEx, Segment Revenue, Operating Income | 10-K/10-Q (audited) | HIGH |
| PP&E Roll-Forwards, PPA Commitments | Footnotes (audited) | HIGH |
| AI Business Metrics ($13B Azure AI run rate) | Earnings Calls, Press Releases | MODERATE |
| AI-Attributed NOPAT | Estimated (3 methods) | LOW (Β±30-40%) |
| GPU Utilization Rates | Industry Benchmarks | LOW (Β±50%) |
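A quality haircut in the spirit of quality_adjust.py could key \(Q_t\) to the tiers in the table above. The haircut values here are hypothetical, chosen only to show the mechanism:

```python
# Hypothetical Q_t haircuts keyed to the data-quality tiers above.
# The tier labels come from the table; the multipliers are illustrative.
HAIRCUT = {"HIGH": 1.0, "MODERATE": 0.75, "LOW": 0.5}

def quality_adjust(cash_flow_b, tier):
    """Discount an AI-attributed cash flow ($B) by data-quality tier."""
    return cash_flow_b * HAIRCUT[tier]

# e.g. a $13B run rate sourced from an earnings call (MODERATE tier)
print(quality_adjust(13.0, "MODERATE"))   # 9.75
```

The effect is to let audited footnote data flow through at face value while penalizing headline metrics that are disclosed only in earnings calls or press releases.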