Meta has signed a new agreement with Amazon Web Services to expand its artificial intelligence infrastructure. Under the deal, the company will deploy AWS's Graviton processors at large scale. The move, which broadens a long-standing partnership, is a concrete signal of Meta's infrastructure plans for next-generation AI systems.
In the first phase, tens of millions of Graviton cores are planned for deployment, and the infrastructure is said to be flexible enough to expand as Meta's AI capacity grows. The choice points to a significant shift in how AI infrastructure is being shaped: while graphics processing units remain critical for training large models, the rise of systems described as agentic AI is increasing demand for CPU-intensive workloads, including real-time reasoning, code generation and the management of multi-step processes.
The Graviton5 processors Meta has chosen offer an architecture designed specifically for such workloads. The new-generation chip has 192 cores and stands out with a cache five times larger than its predecessor's, a design that can cut inter-core communication latency by up to 33 percent. The result is higher bandwidth and faster data processing, which directly benefits agentic AI systems that continuously evaluate and execute multi-step processes.
AWS Graviton infrastructure supports Meta’s agentic AI goals
Graviton processors, built on AWS's Nitro System, offer a structure tuned for performance and security. The system delivers higher efficiency by separating hardware and software components, while also supporting bare-metal scenarios that give workloads direct access to the hardware. Compatibility with AWS services such as the Elastic Network Adapter and Amazon EBS is nevertheless maintained, so Meta can run its own virtual machine stack without sacrificing performance.
In addition, Graviton5-based instances offer Elastic Fabric Adapter support, enabling low-latency, high-bandwidth communication. This is critical for large-scale artificial intelligence tasks that require large numbers of processors to work in coordination. For Meta's systems, which process billions of user interactions, such infrastructure advantages translate directly into performance.
Statements from Meta and AWS executives also clarify the scope of the agreement. Senior AWS executives emphasize that the collaboration covers not only hardware but also data processing and inference services. Meta, for its part, notes that diversifying its sources of compute has become a strategic necessity; particularly for CPU-intensive workloads, the balance of efficiency and performance was decisive in this choice.
Another notable detail is that Graviton5 is built on a 3-nanometer process. Because AWS controls the entire pipeline, from chip design to server architecture, it gains advantages in overall performance and energy efficiency: a performance increase of up to 25 percent over the previous generation, alongside more balanced energy consumption.
At a time when demand for AI compute is surging, scaling infrastructure efficiently becomes decisive for cost and sustainability. Meta's turn to Graviton processors underscores the growing importance of CPU-based workloads and makes the role of purpose-built chips in this field more visible. The approach can help deliver faster and more consistent AI experiences on platforms serving large user bases.