Stock Markets March 11, 2026

Meta Accelerates Data Center Push with Four New Custom AI Chips

Company lays out rapid MTIA rollout to support ranking, recommendations and generative AI inference across its data centers

By Maya Rios

Meta said it has developed four in-house artificial intelligence chips under its MTIA family and plans to deploy multiple new generations over the next two years to support recommendation engines and generative AI inference workloads. The company described a faster-than-normal chip cadence driven by rapid data center capacity growth and heavy capital spending, with initial deployments already underway and additional chip versions coming into service through 2027.

Key Points

  • Meta is rolling out four new generations of MTIA custom AI chips over the next two years to support ranking, recommendations and generative AI inference.
  • The company says custom silicon, manufactured by Taiwan Semiconductor, improves price-per-performance and provides greater diversity in silicon supply, helping insulate Meta from some price shifts.
  • MTIA 300 has already been deployed to train the smaller models that underpin ranking and recommendation systems; MTIA 400 has completed testing and is nearing deployment, while two additional chips are slated to be operational in 2027.
  • Sectors impacted include data centers, semiconductors, cloud/AI infrastructure and digital advertising platforms.

Meta announced on Wednesday that it has designed four new custom chips for artificial intelligence workloads as part of a broader data center expansion strategy. The silicon belongs to the Meta Training and Inference Accelerator, or MTIA, family, a lineup the company first introduced publicly in 2023 and updated with a second-generation release in 2024.

The company said it plans to develop and roll out four successive generations of MTIA chips within the next two years. These chips are intended to accelerate both the ranking and recommendation systems that power core application experiences and newer generative AI inference tasks that produce images and video from text prompts.

Cadence and supply

Meta described the planned pace of release as substantially quicker than typical chip cycles. Yee Jiun Song, Meta's Vice President of Engineering, told CNBC that the company is designing silicon in-house and having it manufactured by Taiwan Semiconductor. Song said building custom chips allows Meta to improve price-per-performance across its data center fleet rather than relying solely on external vendors.

In addition to potential cost and performance gains, Song said the approach gives Meta greater diversity in silicon supply and provides some insulation from price fluctuations. "This is a little bit more leverage," he said.

Where the chips will be used

The first of the new chips, MTIA 300, was deployed a few weeks ago and is intended to assist in training smaller models that support ranking and recommendation functions. Those tasks include deciding what content and advertising to surface to users across Meta's apps, including Facebook and Instagram.

Subsequent MTIA generations are aimed at inference for generative AI tasks such as creating images and video from user prompts. Song emphasized that these chips are not intended for training very large language models.

Meta said it has completed testing of the MTIA 400 and is "on the path to deploying it in our data centers," while the remaining two chips are expected to be operational in 2027.

Investment and useful life

Song noted that it is unusual for a silicon organization to release a new chip every six months and said the rapid cadence reflects how quickly Meta is expanding capacity and increasing capital expenditures. The company expects the MTIA chips to have a standard useful lifetime of five-plus years.

Data center footprint

Meta's AI infrastructure investments include a data center in Louisiana and two facilities in Ohio and Indiana. The company is also reportedly pursuing leased space at the Stargate site in Texas after OpenAI and Oracle scrapped plans to expand that AI data center site.

Implications and context

  • Meta is moving to a faster release cadence for in-house AI silicon to match rapid growth in data center capacity and capital spending.
  • The MTIA family covers both model training for smaller internal models that support ranking and recommendations and inference for generative AI workloads; Meta said the chips will not be used to train very large language models.
  • Manufacturing is being handled by Taiwan Semiconductor, according to Meta's engineering leadership.

Final note

Meta's update outlines a strategy of tighter integration between its software needs and bespoke hardware design, deployed at a quicker-than-normal pace to align with rapid data center expansion and significant capital expenditure commitments.

Risks

  • Rapid six-month chip cadence is atypical for silicon development and may increase execution risk for the semiconductor development and data center operations sectors.
  • Heavy ongoing capital expenditure to expand data center capacity creates exposure for Meta's infrastructure spending plans and may affect related markets such as data center construction and power delivery.
  • Reliance on a single manufacturer for production introduces supply concentration risk for semiconductor sourcing and the broader cloud infrastructure supply chain.
