- Meta and Arm announced a multi-year partnership to optimize AI workloads across Meta’s data centers and consumer platforms.
- Meta’s AI ranking and recommendation systems — core to Facebook and Instagram — will now run on Arm’s Neoverse-based chips.
- The collaboration focuses on power efficiency, targeting both megawatt-scale data centers and milliwatt-level on-device AI.
- PyTorch, ExecuTorch, and vLLM are being optimized for Arm architectures, improving inference speed and energy use.
- The companies will open-source their joint optimizations, supporting broader AI adoption across devices and frameworks.
- This positions Arm as a viable alternative to x86 systems from Intel and AMD, and a new player in large-scale AI compute.
The collaboration aligns Meta’s AI infrastructure with Arm’s low-power architecture to scale machine learning from megawatts to milliwatts.
Meta and Arm have entered a strategic, multi-year partnership designed to make artificial intelligence more power-efficient and scalable across every layer of computing — from hyperscale data centers to smartphones and wearables.
The collaboration, announced this week, will bring Arm’s Neoverse data center architecture into Meta’s infrastructure stack, marking a significant move away from traditional x86 chips by Intel and AMD. In a joint statement, the companies said the goal is to “enable richer, more accessible AI experiences for billions of people worldwide.”
“From the experiences on our platforms to the devices we build, AI is transforming how people connect and create,” said Santosh Janardhan, Meta’s Head of Infrastructure. “Partnering with Arm enables us to efficiently scale that innovation to the more than 3 billion people who use Meta’s apps and technologies.”
Optimizing Meta’s Core AI Systems
Meta’s AI ranking and recommendation systems, which personalize feeds and content discovery across Facebook, Instagram, and Threads, will now run on Arm’s Neoverse-based platforms. These chips are designed to deliver higher performance-per-watt, a key metric as Meta builds a global network of AI data centers to power its next generation of large language and vision models.
According to Arm CEO Rene Haas, the collaboration is a response to a growing industry challenge: rising power consumption from AI compute.
“AI’s next era will be defined by delivering efficiency at scale. Partnering with Meta unites Arm’s performance-per-watt leadership with Meta’s AI innovation to bring smarter, more efficient intelligence everywhere.”
Benchmarks shared by both companies suggest Meta’s infrastructure could achieve performance parity with x86 systems while consuming significantly less power, an advantage that directly supports Meta’s ongoing multi-gigawatt data center expansion.
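Performance-per-watt is the metric both companies keep returning to: at equal throughput, the chip drawing less power wins. A minimal sketch of the arithmetic, using hypothetical placeholder figures rather than any benchmark numbers from Meta or Arm:

```python
# Illustrative sketch only: the throughput and power figures below are
# hypothetical placeholders, not benchmark results from Meta or Arm.

def perf_per_watt(throughput_qps: float, power_watts: float) -> float:
    """Performance-per-watt: useful work delivered per watt consumed."""
    return throughput_qps / power_watts

# Two servers at "performance parity" (equal throughput) but different power draw.
x86_ppw = perf_per_watt(throughput_qps=10_000, power_watts=500)
arm_ppw = perf_per_watt(throughput_qps=10_000, power_watts=350)

# At parity, the efficiency gain comes entirely from the lower power draw.
gain = arm_ppw / x86_ppw - 1
print(f"x86: {x86_ppw:.1f} QPS/W, Arm: {arm_ppw:.1f} QPS/W, gain: {gain:.0%}")
```

At data center scale the same ratio compounds: holding throughput fixed, every percentage point of performance-per-watt improvement is a percentage point shaved off a multi-gigawatt power budget.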
Scaling AI From Cloud to Edge
Beyond data centers, the partnership extends to on-device and edge inference, optimizing AI runtimes such as ExecuTorch, PyTorch’s on-device runtime, and the vLLM inference engine using Arm’s KleidiAI acceleration library.
The companies jointly tuned Meta’s open-source AI stack — including compilers, math libraries, and the FBGEMM matrix multiplication engine — for Arm architectures. These optimizations, now being contributed back to open source, are expected to benefit millions of developers worldwide building AI-powered apps on Arm chips.
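FBGEMM’s specialty is low-precision (e.g., int8) matrix multiplication, the core operation inside ranking and recommendation models: weights and activations are quantized to 8-bit integers, multiplied with a wider integer accumulator, then scaled back to floats. A pure-Python sketch of that idea follows, with toy values; this illustrates the technique only and is not FBGEMM’s or KleidiAI’s actual API:

```python
# Sketch of int8 quantized matrix multiplication, the kind of kernel that
# libraries like FBGEMM (or Arm's KleidiAI) accelerate with SIMD hardware.
# Toy values and a naive triple loop; not any real library's API.

def quantize(values, scale):
    """Map floats to int8 with a simple symmetric scheme (clamp to [-128, 127])."""
    return [max(-128, min(127, round(v / scale))) for v in values]

def int8_matmul(a, b, scale_a, scale_b):
    """Multiply row-major int8 matrices a (m x k) and b (k x n),
    accumulating in a wide integer, then dequantize the result to float."""
    m, k, n = len(a), len(a[0]), len(b[0])
    out = [[0.0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            acc = 0  # int32 accumulator in real kernels
            for p in range(k):
                acc += a[i][p] * b[p][j]
            out[i][j] = acc * scale_a * scale_b  # dequantize
    return out

scale = 0.1
a = [quantize([1.0, 2.0], scale)]                      # 1 x 2 matrix
b = [quantize([0.5], scale), quantize([1.5], scale)]   # 2 x 1 matrix
result = int8_matmul(a, b, scale, scale)               # approximates 1*0.5 + 2*1.5
```

The efficiency win is that the inner loop runs on 8-bit integers instead of 32-bit floats, which is exactly where architecture-specific tuning (vector instructions, cache-friendly packing) pays off on Arm cores.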
This deep integration supports Meta’s broader vision of an AI ecosystem that operates seamlessly across environments, from data center-scale model training to real-time inference on mobile devices.
An Open AI Ecosystem at Scale
Both companies emphasized that the partnership is rooted in open collaboration, with Meta’s optimized AI software stacks — including PyTorch, ExecuTorch, and vLLM — being contributed to the open-source community.
This approach aligns with Meta’s long-standing strategy of democratizing its internal tools, while also expanding Arm’s influence across the global AI developer ecosystem.
From Meta’s perspective, the collaboration ensures that its AI models can run more efficiently across both cloud and consumer hardware, extending the reach of its personalization and generative AI systems to billions of devices.
AI’s Efficiency Revolution Begins
The Arm–Meta partnership represents a major inflection point in AI infrastructure design: one defined not just by raw compute power, but by energy efficiency and accessibility.
By co-developing AI systems that scale “from megawatts to milliwatts,” the companies are signaling a shift toward sustainable AI compute — one that could reshape how future AI models are trained, deployed, and experienced across the world’s most-used platforms.
As Meta’s global infrastructure expands and Arm’s technology continues to move up the stack, the collaboration could redefine what “efficient intelligence everywhere” truly means.