San Francisco, March 13 - When Nvidia's CEO takes the stage to open the company's multi-day GTC developer conference, the presentation is widely anticipated to focus on a suite of products and collaborations intended to keep Nvidia ahead of an expanding competitive field.
The conference - a four-day event that has become Nvidia's marquee venue for publicizing advances in its graphics processing units, data-center hardware, CUDA chip-programming environment, conversational AI agents and physical AI such as robotics - will be watched closely by investors. Many are looking for confirmation that Nvidia's strategy of reinvesting profits into an AI ecosystem is delivering tangible results.
Analysts expect the company to provide a comprehensive roadmap update spanning chip generations known internally as Rubin to Feynman, with particular emphasis on inference workloads, agentic AI, networking and so-called AI factory infrastructure, according to eMarketer analyst Jacob Bourne.
Nvidia's processors today anchor hundreds of billions of dollars in data-center investments by governments and corporations around the world. Even so, the company confronts rising pressure from traditional chip rivals and from some of its largest customers, which are developing their own application-specific silicon.
Industry observers say the overall market for AI chips should continue to expand, but they also anticipate that Nvidia's market share will contract somewhat as use-cases shift. A key dynamic is the movement from large-scale training workloads - in which clusters of Nvidia chips are linked to process vast datasets to train AI models - toward far more numerous inference tasks. These inference tasks are executed by AI agents that move between applications to perform actions for human users.
Those agentic workloads are expected to proliferate to such an extent that they could require a new orchestration layer - an intermediary between human users and fleets of agents - to coordinate activity. In one respect, analysts say, that trend validates the increasing practical utility of AI. However, inference workloads can be performed on a broader range of silicon types, including bespoke chips that major customers like OpenAI and Meta can develop for themselves. Meta has said publicly that it plans to release new AI chips on a six-month cadence.
"Nvidia is definitely going to see more competition compared to a year ago," said KinNgai Chan, a managing director at Summit Insights Group. At present, Nvidia retains over 90% market share in both the training and inference markets, Chan said. He added that the firm expects Nvidia's share to begin slipping in 2027 as in-house ASIC programs scale, particularly in inference workloads. ASICs - application-specific integrated circuits - are tailored to focused tasks and can deliver greater efficiency than general-purpose GPUs.
Defensive moves and acquisitions
To bolster its position, Nvidia completed its $17 billion acquisition of Groq, a start-up that specializes in low-cost, ultra-fast inference processing, in December. Company executives have indicated that GTC will demonstrate how Groq's technology can be integrated into Nvidia's existing CUDA platform.
William McGonigle, an analyst at Third Bridge, expects Nvidia to introduce a new family of servers that pair Groq's chips with Nvidia's networking components to deliver both speed and cost efficiency. The company has also been discussing central processing units, or CPUs, which some see regaining prominence as the agentic AI paradigm shifts bottlenecks toward orchestration tasks carried out on CPUs.
McGonigle noted that Nvidia may highlight servers using only its CPUs - a topic CEO Jensen Huang referenced on a recent earnings call - as part of the company's strategy to address orchestration-level constraints.
Optical interconnects and scaling challenges
Analysts further expect Nvidia to explain its rationale for investing $2 billion each in Lumentum and Coherent, both suppliers of lasers used to send data between chips via beams of light. These co-packaged optics technologies are proposed as a way to accelerate connections among chips within large AI clusters.
However, current production volumes of such optical components do not match the number of chips Nvidia moves each year, creating a scalability and cost challenge. "Nvidia will likely frame co-packaged optics as key to connecting massive AI clusters more efficiently, but the challenge is making it affordable enough to deploy at scale," Bourne said.
Market implications and investor focus
Investors attending or observing GTC will be looking for evidence that Nvidia's investments - across acquisitions, internal CPU development and optical interconnects - are cohering into a commercially viable technology stack that defends its dominant market position as competitors and large customers pursue alternative silicon strategies.
Observers expect announcements at the conference to touch on product roadmaps, partnerships that integrate acquired technologies, and infrastructure approaches to reduce latency and cost in massive data-center deployments. Nvidia's emphasis on both hardware and the software layer that controls it reflects the company's long-stated focus on delivering a full-stack solution for AI workloads.