Arm unveiled its own CPU on March 24 — the Arm AGI CPU, with Meta as lead customer — and the stock jumped 16 percent in a day, according to CNBC. The performance claims are real: 136 Neoverse V3 cores on TSMC's 3nm process, 300 watts, no simultaneous multithreading, and a 36kW air-cooled rack design that Arm says delivers more than twice the performance per rack compared to the latest x86 systems. That's not a roadmap. That's silicon you can order.
But the more interesting question isn't whether the chip is good. It's whether Arm can hold its business together while shipping it.
For 35 years, Arm has been the quiet middleman of the semiconductor industry, licensing CPU designs to Apple, Qualcomm, Amazon, and a hundred other companies that then built their own chips. Arm never competed with its customers. It made money on every chip shipped regardless of who built it. Neutrality was the product.
The Arm AGI CPU changes that. Arm designed it, and Meta is co-developing it to optimize infrastructure that will draw gigawatts of power, according to Arm's announcement. Arm spent $71 million and 18 months building three new lab rooms at its Austin, Texas campus, where a team of roughly 1,000 people now runs the operation. They are not starting from zero: more than 1 billion Arm Neoverse cores are already deployed across cloud data centers, representing 40 percent of that market as of early 2026.
The launch partners are Meta, OpenAI, Cerebras, Cloudflare, F5, Positron, Rebellions, SAP, and SK Telecom, according to Arm's announcement. Cerebras builds wafer-scale AI accelerators. OpenAI is training at a scale that makes most hyperscalers look modest. Meta is spending $115 billion to $135 billion on AI infrastructure in 2026 alone. These are not design-win customers in a traditional sense. They are co-developers with skin in the architecture.
The chip itself is engineered for deterministic performance in agentic AI workloads, which require sustained high-throughput inference rather than bursty training jobs. The design abandons simultaneous multithreading: one thread per core gives more predictable scaling under load. The memory subsystem runs 12 channels of DDR5 at up to 8,800 MT/s, with 825 GB/s of aggregate bandwidth. For context: a standard x86 server rack maxes out around 40 kilowatts and requires liquid cooling to approach that figure. Arm's air-cooled configuration fits 30 blades and 8,160 cores into a 36kW rack, according to Arm's announcement. The liquid-cooled Supermicro design for 200kW holds 336 CPUs and over 45,000 cores. These are rack-scale systems intended to replace GPU clusters for inference workloads.
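The announcement's numbers are internally consistent, which is worth checking with back-of-the-envelope arithmetic. In the sketch below, the per-CPU core count and per-rack totals come from the announcement; the CPUs-per-blade figure is an inference from those totals, not something Arm stated directly:

```python
# Sanity-check the figures quoted in Arm's announcement.
CORES_PER_CPU = 136          # Neoverse V3 cores per AGI CPU (stated)
DDR5_CHANNELS = 12           # memory channels (stated)
MT_PER_S = 8_800             # DDR5 transfer rate, mega-transfers/s (stated)
BYTES_PER_TRANSFER = 8       # a DDR5 channel moves 64 bits per transfer

# Theoretical peak memory bandwidth: channels x rate x bytes per transfer.
bandwidth_gb_s = DDR5_CHANNELS * MT_PER_S * BYTES_PER_TRANSFER / 1_000
print(f"peak bandwidth ~ {bandwidth_gb_s:.0f} GB/s")  # ~845 GB/s theoretical,
# in the same ballpark as the quoted 825 GB/s aggregate figure.

# Air-cooled rack: 30 blades and 8,160 cores implies 2 CPUs per blade.
blades = 30
cpus_per_blade = 8_160 // (blades * CORES_PER_CPU)
print(f"CPUs per blade: {cpus_per_blade}")            # 2

# Liquid-cooled Supermicro rack: 336 CPUs.
print(f"liquid-cooled rack cores: {336 * CORES_PER_CPU}")  # 45,696 -> "over 45,000"
```

The quoted 825 GB/s sits slightly below the theoretical 845 GB/s peak, plausibly reflecting real-world overhead, but the announcement does not say.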
That is the pitch, and it is a real one.
The counterargument is straightforward: if Arm is selling its own CPU, why would Apple, Qualcomm, or Amazon continue paying royalties on designs Arm is now competing against directly? Arm projects the AI datacenter silicon market will reach $1 trillion by 2030 and wants $100 billion of it. That is a large number attached to a company that just became a direct competitor to every major customer it has.
Some of those customers are already building custom silicon. Amazon's Graviton, Apple's A-series and M-series, Google's Axion, and Qualcomm's Snapdragon X all use the Arm architecture. Arm's licensing model made them possible. Now Arm is selling a finished product into the same data centers those chips are targeting. Arm says it wants the chip available to the whole world and is not trying to build traditional sales channels to sell chips. The company that said that also spent $71 million and 18 months building a chip lab with a thousand engineers. Those statements are in tension.
The market bought the narrative: Arm stock jumped 16 percent after the announcement. Whether that trade works depends entirely on whether the licensees decide Arm's neutrality is still worth paying for, and whether the AGI CPU is good enough to make that question moot. Some customers have answered yes. Others are still thinking.