Meta announced on Wednesday that it has designed four new custom chips for artificial intelligence workloads as part of a broader data center expansion strategy. The silicon belongs to the Meta Training and Inference Accelerator, or MTIA, family, a lineup the company first introduced publicly in 2023 and updated with a second-generation release in 2024.
The company said it plans to develop and roll out four successive generations of MTIA chips within the next two years. The chips are intended to accelerate both the ranking and recommendation systems that power core app experiences and newer generative AI inference tasks, such as producing images and video from text prompts.
Cadence and supply
Meta described the planned pace of release as substantially quicker than typical chip cycles. Yee Jiun Song, Meta's Vice President of Engineering, told CNBC that the company is designing silicon in-house and having it manufactured by Taiwan Semiconductor. Song said building custom chips allows Meta to improve price-per-performance across its data center fleet rather than relying solely on external vendors.
In addition to potential cost and performance gains, Song said the approach gives Meta greater diversity in silicon supply and provides some insulation from price fluctuations. "This is a little bit more leverage," he said.
Where the chips will be used
The first of the new chips, MTIA 300, was deployed a few weeks ago and is intended to assist in training smaller models that support ranking and recommendation functions. Those tasks include deciding what content and advertising to surface to users across Meta's apps, including Facebook and Instagram.
Subsequent MTIA generations are aimed at inference for generative AI tasks such as creating images and video from user prompts. Song emphasized that these chips are not intended for training very large language models.
Meta said it has completed testing of the MTIA 400 and is "on the path to deploying it in our data centers," while the remaining two chips are expected to be operational in 2027.
Investment and useful life
Song noted that it is unusual for a silicon organization to release a new chip every six months and said the rapid cadence reflects how quickly Meta is expanding capacity and increasing capital expenditures. The company expects the MTIA chips to have a standard useful lifetime of five-plus years.
Data center footprint
Meta's AI infrastructure investments include a data center in Louisiana and two facilities in Ohio and Indiana. The company is also reportedly pursuing leased space at the Stargate site in Texas after OpenAI and Oracle scrapped plans to expand that AI data center.
Implications and context
- Meta is moving to a faster release cadence for in-house AI silicon to match rapid growth in data center capacity and capital spending.
- The MTIA family covers both model training for smaller internal models that support ranking and recommendations and inference for generative AI workloads; Meta said the chips will not be used to train very large language models.
- Manufacturing is being handled by Taiwan Semiconductor, according to Meta's engineering leadership.
Final note
Meta's update outlines a strategy of tighter integration between its software needs and bespoke hardware design, deployed at a quicker-than-normal pace to align with rapid data center expansion and significant capital expenditure commitments.