Barclays has produced a model that ties disclosed financial information from major AI labs to implied chip spending by hyperscale technology companies, arguing this approach provides a more direct read on infrastructure investment than methods based on token consumption or query volumes. The bank's conclusions point to a larger and longer AI infrastructure investment cycle than many market participants currently expect.
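The mechanics of such a disclosure-based approach can be sketched in a few lines. The snippet below is a hypothetical reconstruction, not Barclays' actual model: the function, its inputs and every figure used are illustrative placeholders showing how disclosed lab compute spending might be grossed up to total demand and mapped to implied chip outlays.

```python
# Hypothetical sketch of a disclosure-based chip-spend model.
# None of these assumptions or figures come from Barclays; they are
# placeholders showing how disclosed lab financials could be mapped
# to implied hyperscaler chip spending.

def implied_chip_spend(lab_compute_spend_bn: float,
                       lab_share_of_demand: float,
                       chip_share_of_capex: float) -> tuple[float, float]:
    """Gross disclosed lab compute spend up to total compute demand,
    then back out the chip portion of the capex needed to serve it."""
    # If the observed labs account for only a fraction of total demand,
    # total compute spend scales by the inverse of that share.
    total_compute_spend_bn = lab_compute_spend_bn / lab_share_of_demand
    # Assume hyperscaler capex roughly tracks compute demand in dollars,
    # with chips making up a fixed fraction of that capex.
    chip_spend_bn = total_compute_spend_bn * chip_share_of_capex
    return total_compute_spend_bn, chip_spend_bn

# Illustrative inputs: $40B of disclosed lab compute spend, labs at
# two-thirds of global demand (the article's concentration estimate),
# chips at 60% of data-center capex. All numbers are placeholders.
total, chips = implied_chip_spend(40.0, 2 / 3, 0.60)
print(f"Implied total compute spend: ${total:.0f}B, chip spend: ${chips:.0f}B")
```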
In a note outlining the framework, analysts led by Tom O'Malley wrote that their work indicates "the capex up-cycle lasts into at least 2028 and could also be magnitudes larger vs. consensus ($225B+ in CY27/CY28)." The projection implies that capital expenditure by hyperscalers in 2027 and 2028 could exceed current Street forecasts by more than $225 billion, a divergence Barclays characterizes as a "material positive" for companies that make AI semiconductors and related infrastructure components.
The bank highlights a disconnect between market pricing and its scenario for the timing of the spending peak. Barclays notes that Nvidia (NASDAQ:NVDA) appears to be valued by the market as if hyperscaler capex will plateau in 2027, while the bank's framework projects spending will continue to rise into 2028.
Central to Barclays' timeline is an expectation that AI research labs will reach a phase of recursive self-improvement that materially raises compute efficiency. Under the bank's assumptions, hyperscale capital expenditure would peak in 2028 because infrastructure must be deployed before the subsequent ramp in training workloads. Barclays further assumes that operating expenses for training would reach their maximum in 2029, implying the capital deployment peak precedes the training cost peak by approximately one year.
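That one-year offset can be made concrete with a toy series. The numbers below are invented purely to show the mechanical lead-lag relationship the bank describes, in which hardware installed in one year carries the training workload of the next:

```python
# Toy illustration of capex leading training opex by one year.
# Figures are invented, not Barclays estimates.
capex = {2026: 100, 2027: 160, 2028: 200, 2029: 150}  # $B, peaks in 2028

# If training opex tracks the prior year's installed capacity,
# its peak lands one year after the capex peak.
train_opex = {year + 1: 0.5 * spend for year, spend in capex.items()}

print(max(capex, key=capex.get))            # 2028: capex peak
print(max(train_opex, key=train_opex.get))  # 2029: training opex peak
```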
The analysts acknowledge their baseline projection may still be conservative. Their model assumes that existing training-class chips will be adequate to handle the majority of inference workloads beginning in 2027. If that assumption proves incorrect, additional procurement of inference-specific silicon would raise future spending beyond the current Barclays estimates.
Barclays also anticipates that the concentration of compute demand will likely broaden over time. Today, OpenAI and Anthropic are estimated to account for roughly two-thirds of global AI compute demand, according to the bank's calculations. However, other AI labs and platforms, including Gemini, Grok and various sovereign AI programs, could gradually capture a larger share of compute needs, which would further expand the market for infrastructure investment.
By deriving implied chip spend from disclosed financials tied to leading AI labs rather than relying solely on usage proxies, Barclays presents a framework that shifts the expected cadence and magnitude of capital deployment. The model's implications touch on vendor revenue, supply chain planning and how investors position hardware stocks ahead of an extended capex cycle.
Key points
- Barclays' disclosure-based framework suggests hyperscaler capex will remain elevated into at least 2028 and could be more than $225 billion higher than consensus for CY27 and CY28.
- The bank views this discrepancy as a material positive for AI semiconductor companies and related infrastructure vendors, with Nvidia specifically cited as appearing to price in an earlier peak.
- OpenAI and Anthropic are estimated to represent roughly two-thirds of today's compute demand, but other labs and sovereign initiatives could expand their share and add to future infrastructure needs.
Risks and uncertainties
- The Barclays baseline assumes existing training chips will be sufficient for most inference workloads starting in 2027; if this proves false, additional inference-specific silicon spending would increase projected capex further.
- Timing assumptions about when AI labs reach recursive self-improvement drive the peak-capex estimate; if that timing shifts, the projected peak in infrastructure spending could move as well.
- Concentration of compute demand today implies that changes in the growth trajectories of major labs or the entry of other developers could materially alter the scale and allocation of future hyperscaler capex.