Nvidia used its annual GTC developer conference to emphasize a transition in enterprise AI spending - from the heavy compute of model training toward the high-volume, real-time demands of inference and large-scale deployment. The company portrayed that shift as a major growth driver for its chips over the next several years.
At the event, Nvidia’s chief executive described demand for graphics processing units as "skyrocketing," pointing to a roughly millionfold increase in computing requirements over the past two years as inference workloads scale up. That surge underpins the company’s revised revenue view for its AI hardware.
Nvidia told attendees that the revenue opportunity for its artificial intelligence chips could reach at least $1 trillion through 2027. That new projection represents an increase from the prior $500 billion opportunity through 2026 the company had cited for its Blackwell and Rubin families of processors.
As part of the announcements, Nvidia introduced a new central processor and unveiled an AI system that incorporates technology licensed from Groq, the chip start-up whose designs Nvidia acquired the rights to for $17 billion in December. The moves are framed as steps to expand Nvidia’s footprint in inference computing - the process of answering queries - an area that now faces more direct competition from central processing units and custom inference chips developed by other firms.
Historically, Nvidia has been dominant in model training, which requires the massive parallel compute of GPUs. The company emphasized at GTC that inference is becoming the next major phase for AI infrastructure. "The inference inflection has arrived," the company said during the keynote. "And demand just keeps on going up."
Nvidia provided details on how it expects specific chips to be deployed within inference workflows. The company said its Vera Rubin chips will address the "prefill" step - transforming user inputs into tokens that feed AI systems - while Groq-derived chips will handle the "decode" stage that generates the model responses.
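The prefill/decode split that Nvidia describes can be illustrated with a toy sketch. All function names, the tokenizer, and the generation rule below are invented for illustration only; they do not correspond to any Nvidia or Groq API. The point is structural: prefill runs once over the whole prompt, while decode loops token by token, which is why the two stages can be assigned to different hardware.

```python
def prefill(prompt: str) -> list[int]:
    """Prefill stage: transform the user's input into tokens.
    Toy tokenizer - one token id per whitespace-separated word.
    Runs once per request over the full prompt (compute-heavy)."""
    return [hash(word) % 50_000 for word in prompt.split()]

def decode(context: list[int], max_new_tokens: int) -> list[int]:
    """Decode stage: generate output tokens one at a time, each step
    conditioned on everything produced so far. This sequential loop
    stresses memory bandwidth rather than raw parallel compute."""
    out = list(context)
    for _ in range(max_new_tokens):
        # A real model samples from a learned distribution; here we
        # just derive the next id deterministically from the context.
        out.append(sum(out) % 50_000)
    return out[len(context):]

tokens = prefill("What is the capital of France")   # 6 prompt tokens
reply = decode(tokens, max_new_tokens=4)            # 4 generated tokens
print(len(tokens), len(reply))
```

In this toy version, one function call handles the entire prompt while the other iterates per output token - the asymmetry that, per Nvidia's description, maps Vera Rubin chips to prefill and Groq-derived chips to decode.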
Looking beyond Rubin, Nvidia referenced a Feynman roadmap expected in 2028 following Rubin Ultra, with limited public detail other than that future generations will include both AI and networking chips. The company also announced initiatives aimed at autonomous AI agents. NemoClaw, which integrates with the OpenClaw platform, was presented as a tool to add privacy and safety controls to agentic systems capable of executing tasks with minimal human direction.
Nvidia’s shares briefly ticked higher following the announcements before paring gains to finish the day up 1.65%.
Analysts react to GTC and the $1 trillion outlook
Wall Street research teams read the $1 trillion projection as a notable signal about demand trajectory and product positioning. Several of the firms that commented framed the disclosure as a conservative baseline that leaves room for upside.
Wolfe Research characterized the updated disclosure as an increase to the prior $500 billion estimate and said the new figure "suggests upside to CY27 revenue, and the company noted that demand was still growing." Wolfe added: "We consider this revenue disclosure to be ambiguous enough so as to not reflect firm guidance, yet still provides significant room for upside vs. consensus. As such, we consider this revenue level to be a floor, not a ceiling."
Bernstein noted that the $1 trillion number, like the earlier $500 billion figure, represents a snapshot with several quarters remaining before CY27 ends. The firm said: "More importantly, Colette confirmed to us that the number includes ONLY Blackwell and Rubin (and associated networking); it does NOT include any other products (such as Groq LPUs, CPX, CPU racks etc). Hence, we suspect datacenter will come in well above this $1T target, and well above expectations." Bernstein added that Nvidia’s roadmap looks strong and that new offerings should help secure its inference position as it already dominates training.
Goldman Sachs said Nvidia provided visibility into a strong 2027 growth outlook consistent with its estimates and well above the Street. The firm indicated that Nvidia’s introduction of Groq’s LPX rack reinforces the company’s commitment to inference - a critical and increasingly competitive segment within AI infrastructure.
Morgan Stanley emphasized cost-per-token leadership for Nvidia-based inference that the firm expects to improve with Rubin. Their checks led them to the view that Nvidia’s market share will be more stable than some expect and that AI spending strength should persist. Morgan Stanley continues to rate Nvidia as Top Pick in semiconductors.
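Cost per token, the metric Morgan Stanley highlights, is straightforward arithmetic: amortized system cost per hour divided by token throughput. The sketch below uses purely hypothetical numbers - they are not Nvidia figures or analyst estimates - to show how higher throughput at a fixed hourly cost drives the metric down.

```python
def cost_per_million_tokens(hourly_system_cost_usd: float,
                            tokens_per_second: float) -> float:
    """Dollars spent per one million generated tokens, given an
    amortized hourly system cost and sustained token throughput."""
    tokens_per_hour = tokens_per_second * 3600
    return hourly_system_cost_usd / tokens_per_hour * 1_000_000

# Hypothetical example: a $50/hour system sustaining 10,000 tokens/s.
print(round(cost_per_million_tokens(50.0, 10_000), 4))  # ~$1.39 per 1M tokens
```

Under this framing, a successor platform that doubles throughput at the same hourly cost halves the cost per token - the kind of improvement the firm expects from Rubin.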
Stifel highlighted the headline disclosure of $1 trillion in cumulative purchase order visibility for the Grace Blackwell and Vera Rubin platforms through CY2027, interpreting the figure as confirmation of accelerating demand and continued "AI Factory" build-out. The firm pointed to strategic elements including the unbundling of CPU and networking stacks, integration of Groq LPUs to capture inference edges, and the launch of OpenClaw/NemoClaw, which it described as an "HTTPS moment" for Agentic AI.
Implications for the market
Taken together, Nvidia’s announcements and the analyst reactions suggest that investors and customers should expect an expanding market for inference-focused hardware and systems. The company’s positioning across GPUs, licensed Groq technology, CPUs and networking could influence demand patterns across datacenter equipment and cloud infrastructure.
Analysts cautioned that the $1 trillion figure is narrowly defined to include specific platforms and networking and does not incorporate other product families. Several research teams interpreted that omission as a source of potential upside for datacenter spending beyond the $1 trillion baseline.
Overall, the market response was positive but measured, reflecting both enthusiasm about the extended opportunity and recognition that the disclosed figure is intended as a conservative starting point rather than definitive guidance.
Summary of key takeaways
- Nvidia raised its addressable revenue outlook for AI chips to at least $1 trillion through 2027, up from a previously cited $500 billion through 2026 for specific chip families.
- The company signaled a strategic push into inference computing with new processors, an AI system integrating Groq technology, and platform work around agentic AI via NemoClaw and OpenClaw.
- Analysts largely view the $1 trillion disclosure as a conservative floor that leaves room for upside, while noting it covers only certain products and associated networking.
Risks and uncertainties
- Scope of the $1 trillion figure - the disclosure explicitly includes Blackwell and Rubin platforms plus networking but excludes other Nvidia products, which leaves ambiguity about total datacenter revenue outcomes.
- Competition in inference - custom CPUs and alternative inference chips from other firms present a growing competitive challenge in the market Nvidia seeks to expand into.
- Market reaction variability - initial share gains following the announcements were pared back, underscoring that investor responses may remain mixed until further detail converts the disclosed visibility into firm guidance.