Nvidia this week expanded its revenue opportunity forecast for chips used in artificial intelligence, saying the market for those products could be at least $1 trillion through 2027. That projection doubles a forecast the company made just last month, when it estimated $500 billion in potential revenue through 2026 tied to its Blackwell and Rubin families of processors.
At the company’s annual GTC developer conference in San Jose, California, CEO Jensen Huang unveiled a new central processor and an AI system that incorporates technology Nvidia licensed from Groq as part of a $17 billion deal announced in December. Huang also offered an early look at a longer-term plan, previewing the Feynman chip architecture planned for 2028 – a follow-on to the Rubin Ultra series – alongside a broad slate of planned products that includes additional AI processors and networking components.
The announcements underscore Nvidia’s effort to broaden its foothold beyond the model-training GPU market into inference computing - the segment that handles real-time AI responses to user queries. While Nvidia’s graphics processors have dominated training workloads, the inference phase is drawing growing competition from central processors and custom silicon developed by other firms.
“The inference inflection has arrived,” Huang said at the GTC event. “And demand just keeps on going up.”
Nvidia is also pursuing tools aimed at autonomous AI agents. One such product, NemoClaw, integrates with the OpenClaw platform and is pitched as a way to introduce privacy and safety controls to systems capable of performing many tasks with minimal human intervention.
Even with product expansions and technology partnerships, investors have raised questions about whether the company can sustain its recent growth trajectory and earn returns on the capital it is plowing back into the AI ecosystem. After an extraordinary multi-year run that culminated in Nvidia becoming the first public company to reach a $5 trillion market valuation last October, concerns have mounted about the longevity of the AI infrastructure boom and how future spending patterns might evolve.
The chipmaker reported results for the January quarter that beat expectations and provided a current-quarter revenue forecast above analyst estimates. Yet, despite that momentum, Nvidia shares have been largely unchanged since September. The stock’s price-to-earnings multiple has contracted to roughly 17 times fiscal 2028 consensus estimates, according to Vital Knowledge.
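For context, the multiple cited above is a forward price-to-earnings ratio: share price divided by estimated future earnings per share. A minimal sketch of that arithmetic, using made-up placeholder numbers rather than Nvidia’s actual share price or consensus estimates:

```python
# Sketch: how a forward price-to-earnings multiple is computed.
# All figures below are hypothetical placeholders, not Nvidia's
# actual share price or fiscal 2028 consensus EPS.

def forward_pe(share_price: float, estimated_eps: float) -> float:
    """Forward P/E = current share price / estimated future earnings per share."""
    return share_price / estimated_eps

# Example: a $170 share price against $10 of estimated future EPS
# implies a 17x forward multiple.
multiple = forward_pe(170.0, 10.0)
print(f"{multiple:.1f}x")  # prints "17.0x"
```

A lower forward multiple, as described in the article, means investors are paying less per dollar of expected future earnings - consistent with growth already being priced in.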
Analysts at Vital Knowledge argued that the central challenge facing Nvidia - and the broader AI industry - is investor unease about whether the infrastructure surge is sustainable. They said people worry the boom “isn’t sustainable and could give way to a sharp cliff at some point in the next few years as companies shift their focus back to prioritizing free cash flow.”
Vital Knowledge also pointed to Nvidia’s aggressive strategy of investing in AI startups, a practice that has prompted “accusations of the firm ‘buying’ revenue.” The firm warned that as spending emphasis pivots from training to inference, GPUs might be “unnecessarily powerful and expensive” for many inference tasks, leaving Nvidia vulnerable to pricing pressure.
“Nvidia could be forced to compete on price, and sacrifice margin, to fend off competitors,” the Vital Knowledge analysts wrote, noting that the company faces intensifying competition from custom silicon developed by the likes of Broadcom and Marvell Technology as well as third-party processors from AMD and other players.
Vital Knowledge also expressed reservations about the recently licensed Groq technology, suggesting it could “simply cannibalize more expensive Vera Rubin chips rather than meaningfully expand the firm’s TAM.”
The combination of robust product announcements, elevated market opportunity estimates, and better-than-expected near-term results has not yet translated into renewed multiple expansion for the shares. That leaves investors parsing how much growth is already priced in versus how much incremental free cash flow and margin durability Nvidia can realistically deliver as the industry’s workload mix shifts.
Key takeaways
- Nvidia raised its addressable AI-chip revenue estimate to at least $1 trillion through 2027, up from $500 billion through 2026 tied to Blackwell and Rubin lines.
- The company introduced new processors, previewed a 2028 Feynman architecture, and highlighted efforts in inference computing and autonomous AI agents via NemoClaw and OpenClaw integration.
- Despite better-than-expected quarterly results and raised guidance, shares have been flat since September with the P/E compressing to around 17x fiscal 2028 estimates, reflecting investor concerns over sustainability of AI infrastructure demand and margin pressure from competition.
Risks and uncertainties
- Sustainability risk - Investors worry the AI infrastructure boom may prove unsustainable, which could weaken hardware demand across the semiconductor sector and slow enterprise AI deployments.
- Return-on-investment risk - Heavy reinvestment in the AI ecosystem and stakes in startups raise questions about whether those commitments will generate commensurate free cash flow and returns.
- Competitive and margin risk - Growing competition from custom chips by Broadcom and Marvell and from third-party processors such as AMD’s could force Nvidia to compete on price, eroding margins in inference workloads.