Stock Markets March 11, 2026

Barclays Says Hyperscaler AI Hardware Spending Could Be Hundreds of Billions Higher Than Forecast Through 2028

Bank’s model, anchored to OpenAI and Anthropic disclosures, projects a capex cycle extending into 2028 and a potential $225 billion-plus shortfall versus consensus for 2027-28

By Jordan Park

Barclays' proprietary analysis of financial disclosures linked to OpenAI and Anthropic suggests that hyperscale cloud operators may need to deploy far more capital into AI compute infrastructure than current Street forecasts assume. The bank estimates consensus could understate hyperscaler capital expenditure in calendar years 2027 and 2028 by more than $225 billion, a gap that would likely benefit AI semiconductor suppliers and alter market expectations about the timing of the capex peak.

Key Points

  • Barclays' model, anchored to financial disclosures tied to OpenAI and Anthropic, implies hyperscaler chip spending will be substantially higher than consensus.
  • The bank estimates a potential shortfall versus Street forecasts of more than $225 billion for hyperscaler capex in 2027 and 2028 combined.
  • OpenAI and Anthropic are estimated to represent about two-thirds of current compute demand, but other labs and sovereign programs could expand their share and raise infrastructure needs.

Barclays has produced a model that ties disclosed financial information from major AI labs to implied chip spending by hyperscale technology companies, arguing this approach provides a more direct read on infrastructure investment than methods based on token consumption or query volumes. The bank's conclusions point to a larger and longer AI infrastructure investment cycle than many market participants currently expect.

In a note outlining the framework, analysts led by Tom O'Malley wrote that their work indicates "the capex up-cycle lasts into at least 2028 and could also be magnitudes larger vs. consensus ($225B+ in CY27/CY28)." The projection implies that capital expenditure by hyperscalers in 2027 and 2028 could exceed current Street forecasts by more than $225 billion, a divergence Barclays characterizes as a "material positive" for companies that make AI semiconductors and related infrastructure components.

The bank highlights a disconnect between market pricing and its scenario for the timing of the spending peak. Barclays notes that Nvidia (NASDAQ:NVDA) appears to be valued by the market as if hyperscaler capex will plateau in 2027, while the bank's framework projects spending will continue to rise into 2028.

Central to Barclays' timeline is an expectation that AI research labs will reach a phase of recursive self-improvement that materially raises compute efficiency. Under the bank's assumptions, hyperscale capital expenditure would peak in 2028 because infrastructure must be deployed before the subsequent ramp in training workloads. Barclays further assumes that operating expenses for training would reach their maximum in 2029, implying the capital deployment peak precedes the training cost peak by approximately one year.

The analysts acknowledge their baseline projection may still be conservative. Their model assumes that existing training-class chips will be adequate to handle the majority of inference workloads beginning in 2027. If that assumption proves incorrect, additional procurement of inference-specific silicon would raise future spending beyond the current Barclays estimates.

Barclays also anticipates that the concentration of compute demand will likely broaden over time. Today, OpenAI and Anthropic are estimated to account for roughly two-thirds of global AI compute demand, according to the bank's calculations. However, other AI labs and platforms, including entities referenced as Gemini, Grok and various sovereign AI programs, could gradually capture a larger share of compute needs, which would further expand the market for infrastructure investment.

By deriving implied chip spend from disclosed financials tied to leading AI labs rather than relying solely on usage proxies, Barclays presents a framework that shifts the expected cadence and magnitude of capital deployment. The model's implications touch on vendor revenue, supply chain planning and how investors position hardware stocks ahead of an extended capex cycle.



Risks and uncertainties

  • The Barclays baseline assumes existing training chips will be sufficient for most inference workloads starting in 2027; if this proves false, additional inference-specific silicon spending would increase projected capex further.
  • Timing assumptions about when AI labs reach recursive self-improvement drive the peak-capex estimate; if that timing shifts, the projected peak in infrastructure spending could move as well.
  • Concentration of compute demand today implies that changes in the growth trajectories of major labs or the entry of other developers could materially alter the scale and allocation of future hyperscaler capex.

