Arm announced its first in-house chip on March 24 — the Arm AGI CPU. No, AGI here does not stand for artificial general intelligence. It stands for Agentic AI Infrastructure, which is what Arm's new server processor is designed to accelerate. For a company that spent 35 years licensing its chip architecture to others, building its own silicon is a structural shift worth taking seriously on its own terms. That Meta — a company burning $115 billion to $135 billion in capital expenditures this year alone — was the one to push Arm into the manufacturing business tells you where the leverage has shifted in AI infrastructure.
The chip itself is a dual-chiplet design built on a TSMC 3nm process, packing 136 Neoverse V3 cores running at 3.7 GHz with no turbo boost and no simultaneous multithreading, according to Arm's own announcement and independent analysis from The Next Platform. Each core gets 2 MB of L2 cache. The memory subsystem runs 12 channels of DDR5-8800, and the chip connects via 96 PCIe Gen6 lanes. The thermal design power is 300 watts. The 136-core count is itself a yield story: Arm targets 144 cores per socket, and at 94.4 percent gross yield, 136 is what you ship.
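The yield figure in that last sentence follows directly from the two core counts quoted above; a quick check of the arithmetic (nothing here beyond the published numbers):

```python
# Yield arithmetic behind the 136-core SKU: Arm fabricates 144 cores
# per socket and ships parts with 136 enabled, so the quoted gross
# yield is simply the ratio of the two.
physical_cores = 144
shipped_cores = 136

gross_yield = shipped_cores / physical_cores
print(f"{gross_yield:.1%}")  # 94.4%
```

Disabling a handful of cores per die to absorb defects is standard practice in server silicon; the notable part is that Arm stated the number openly.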
Arm's headline claim is more than two times the performance per rack compared to x86 platforms, and the same ratio for performance per watt. That framing — per rack, not per core — is deliberate. A reference configuration shows 8,160 cores in a single 36 kilowatt air-cooled rack. A liquid-cooled Supermicro design pushes toward 45,000 cores. Density is the point. The per-thread performance advantage over an unspecified x86 core — which The Next Platform presumes is an Intel Granite Rapids P-core — is closer to 1.3x. The two-times number holds at the rack level because of core count, not architectural supremacy. Arm's own announcement carries the caveat: performance claims are based on Arm's own estimates comparing a fully populated rack of AGI CPU-based servers against comparable x86 configurations. Actual results will vary.
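The gap between the 1.3x per-thread figure and the 2x rack figure is pure core-count arithmetic, and it can be made explicit. A sketch using only the numbers quoted above; the implied x86 baseline core count is an inference for illustration, not a figure Arm published:

```python
# Rack-level arithmetic behind the "2x per rack" claim, using the
# figures from the announcement and The Next Platform's analysis.
cores_per_socket = 136
rack_cores = 8160

sockets_per_rack = rack_cores // cores_per_socket
print(sockets_per_rack)  # 60 sockets in the air-cooled reference rack

# If each Arm thread is ~1.3x an x86 thread, a 2x rack-level win
# implies the comparison x86 rack holds roughly this many cores
# (an inference from the two published ratios, not a stated spec):
per_thread_advantage = 1.3
rack_advantage = 2.0
implied_x86_rack_cores = rack_cores * per_thread_advantage / rack_advantage
print(round(implied_x86_rack_cores))  # ~5304
```

In other words, the headline number is won on density: more cores per rack, not dramatically faster cores.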
Meta is the lead partner and co-developer. Santosh Janardhan, head of infrastructure at the social media and AI company, said Meta worked alongside Arm to build a platform that improves performance density and supports a multi-generation roadmap for evolving AI systems. The relationship is deeper than a typical launch partnership. Arm spent $71 million over roughly 18 months building three new chip lab rooms at its Austin, Texas campus, and the first conversations about turning Arm's architecture into a finished product started with Meta asking for exactly that, according to reporting by CNBC. A Meta software engineer, Paul Saab, said the intent from the start was to make the chip available broadly rather than keep it internal.
The partner list beyond Meta carries more signal in its gaps than in its contents. OpenAI, Cerebras, Cloudflare, F5, Positron, Rebellions, SAP, and SK Telecom appear as launch partners. OpenAI's head of industrial compute said the chip will strengthen the orchestration layer coordinating large-scale AI workloads. Nvidia CEO Jensen Huang appeared in a recorded statement. But Qualcomm — which won a court ruling against Arm last fall over licensing agreement terms — was not in the congratulatory video, as The Verge noted. That's a relationship with history.
The financial stakes are significant. Arm told Wall Street it could drive $15 billion in revenues from AGI CPU products by 2031. The company's stock rose roughly 15 percent the day after the announcement, pushing its market capitalization to about $164 billion. SoftBank still owns around 90 percent of Arm shares. For context: Arm is not a startup making projections. It is an infrastructure licensing company that spent 35 years not building chips, now projecting a revenue line that rivals some of the largest chip companies in the world.
The AGI acronym has predictably caused confusion. Arm's own announcement clarifies the expansion, but the marketing material leads with "AGI CPU" in a font size that suggests the ambiguity is at minimum useful. The actual capability claim — agentic AI infrastructure — is more prosaic and more interesting: this is a chip optimized for the workload pattern of AI agents, where a large number of relatively simple inference calls fan out across a system in parallel. High core count, high memory bandwidth, deterministic clock speeds without turbo variance. That is a coherent design philosophy even if the acronym is not.
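The workload pattern described above — many small, independent inference calls issued concurrently — is worth making concrete. A minimal sketch of that fan-out shape; the `infer` function is a hypothetical placeholder, not any Arm or vendor API:

```python
import asyncio

async def infer(prompt: str) -> str:
    # Placeholder for one lightweight inference call; in an agentic
    # system each of these would hit a model or tool endpoint.
    await asyncio.sleep(0.01)  # simulate I/O-bound latency
    return f"result for {prompt!r}"

async def fan_out(prompts: list[str]) -> list[str]:
    # Many simple, independent calls running in parallel: the pattern
    # a high-core-count, high-memory-bandwidth CPU with steady clocks
    # is positioned to schedule efficiently.
    return await asyncio.gather(*(infer(p) for p in prompts))

results = asyncio.run(fan_out([f"task-{i}" for i in range(64)]))
print(len(results))  # 64
```

The design logic follows from the pattern: throughput across many threads matters more than peak single-thread speed, and fixed clocks keep tail latency predictable across thousands of concurrent calls.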
Arm CEO Rene Haas was direct in an interview with Wired: we are now in a new business for Arm, and we are supplying CPUs. Full production is targeted for the second half of 2026. The lab rooms in Austin exist. The chip has been validated. Whether the performance claims survive contact with third-party benchmarks — and whether $15 billion in 2031 revenues materializes — is the work of the next several years. What is not in question is that the relationship between chip designers, cloud providers, and AI labs has reached a level of vertical integration that would have been structurally impossible five years ago. Meta wanted a chip. Arm built it. The rest is execution.