Meta Platforms announced on Friday that it will incorporate Amazon Web Services' Graviton5 CPU chips into its compute environment, in a multi-year agreement that an AWS executive told Reuters would be valued in the billions of dollars. The arrangement, AWS said, will bring "tens of millions of cores" of Graviton processing capacity to Meta.
AWS’ Graviton5 is the fifth generation of the cloud provider’s in-house central processing unit, a chip program the company has pursued since 2018. Amazon buys the chips directly from Taiwan Semiconductor Manufacturing Co. (TSMC), which manufactures them. Each Graviton5 chip contains 192 cores, and those cores can be partitioned and assigned to different tasks within customer workloads.
Nafea Bshara, vice president and distinguished engineer at Amazon Web Services, said, "We pass that savings on to the customers," reiterating that the Meta engagement would stretch across multiple years and be worth billions of dollars. The comment underscores Amazon’s pitch that its custom silicon yields cost advantages it can pass along to cloud customers.
The announcement arrives amid an AI-driven resurgence in demand for CPUs. While graphics processing units made by companies such as Nvidia remain essential for training artificial intelligence models, the trained models frequently run on CPUs once deployed, a dynamic that is renewing focus on the CPU market. Intel noted this week that CPU prices were rising as demand increased.
Meta’s deal with AWS complements its earlier chip arrangements. Meta has previously signed large agreements with Nvidia and Advanced Micro Devices, and has collaborated with Arm Holdings on Arm’s new CPU designs. Santosh Janardhan, head of infrastructure at Meta, framed the AWS partnership in strategic terms: "As we scale the infrastructure behind Meta’s AI ambitions, diversifying our compute sources is a strategic imperative," he said in a statement.
This partnership illustrates several themes now shaping cloud and AI infrastructure decisions: the emergence of custom server CPUs as a competitive lever for cloud providers, the economics of in-house silicon and foundry manufacturing, and the operational distinction between GPU-dependent model training and CPU-based inference and deployment.
Investors and enterprise customers tracking compute strategies are likely to view the Meta-AWS deal as another example of large cloud clients spreading demand across multiple chip suppliers and architectures while seeking cost efficiencies from providers that design and procure their own silicon.