Chip Node Names Lose Meaning as Industry Turns to Packaging
Five chip stories landed this week. Together they reveal an industry that stopped competing on transistor shrinkage — and started competing on everything else.

The Chip Industry's Naming Convention Died This Week
Five events landed in the chip business this week, and individually they'd each be a story. Together they add up to something quieter and stranger: an industry that spent thirty years organizing itself around a metric — the transistor-shrinking node — that stopped meaning what it claimed around the time Clinton took office.
Start with the naming. TSMC announced this week it'll ship its A14 node in 2028, followed by an A13 optical shrink in 2029, per Semiconductor Engineering. The report, in what may be the most honest sentence published about leading-edge chipmaking this year, noted that it's not clear what those numbers actually mean anymore. That sentence deserves to be read twice. Node names stopped corresponding to physical transistor dimensions in the 1990s — Renesas traces the divergence to 1994; WikiChip says the numbers became entirely marketing by 2017. Intel's 14A and TSMC's A14 are not measurements. They are brands.
This week's other announcements prove the point by going sideways. Marvell acquired Polariton Technologies, a Swiss plasmonics company, to build optical connections inside chips rather than shrink the transistors. Rambus announced a SOCAMM2 memory module chipset — not a new node, a new way of packaging memory next to processors. Advantest released a digital test card for AI chips that runs at 5 Gbps per pin. Movellus launched telemetry that monitors power delivery at nanosecond resolution inside a chip. None of these are transistor shrinks. All of them are real engineering.
The first customer on Intel's 14A node doesn't change the naming problem — it illustrates it. Tesla said this week it'll use Intel's 14A process for its Terafab AI chip complex in Austin, making it Intel Foundry's first major external customer for the new node. Intel shares rose 3.6% after the announcement. The analysts were right to call it a win: Lip-Bu Tan had said Intel would exit the foundry business without an external customer, and now it has one, named Elon. Whether Tesla's bet on Intel is a bet on Intel's technology or a negotiating tactic with TSMC is a question nobody answered cleanly. The $5 trillion to $13 trillion that Bernstein estimates it would cost to build one terawatt of compute capacity suggests Terafab is less a factory plan than a capital-expenditure thought experiment.
Cerebras Systems filed its S-1 last week, and the filing is a document worth reading if you've ever wondered what a company looks like when it's simultaneously the most architecturally interesting and operationally fragile thing in its market. The wafer-scale engine is genuinely different — a single chip the size of a wafer, with 21 petabytes per second of memory bandwidth that Nvidia's Blackwell can't match on paper. The $510 million in 2025 revenue represents 76% year-over-year growth. The $20 billion Master Relationship Agreement with OpenAI for 750 megawatts of inference compute — expandable to 2 gigawatts — is either a transformative commercial partnership or the world's most expensive hosting contract, depending on whether Cerebras can actually deliver megawatt-scale infrastructure it's never operated.
What the S-1 reveals is that 86% of Cerebras's 2025 revenue came from two related entities in the United Arab Emirates: the Mohamed bin Zayed University of Artificial Intelligence at 62%, and Group 42 at 24%. US-billed revenue actually shrank 34% year-over-year. The $237.8 million in GAAP profit that looks like operational vindication is a $363.3 million one-time gain from extinguishing a forward contract liability related to G42 — an accounting event, not a business trajectory. Strip that out and the non-GAAP loss widened to $75.7 million from $21.8 million. The company that wants to be the inference infrastructure for the AI age is still burning cash, still concentrated in a single geography, and still transforming from a hardware shipper into a hyperscale operator it's never been.
Samsung's Pyeongtaek complex saw 40,000 union workers rally Thursday for higher bonuses, threatening an 18-day walkout starting May 21 that they estimate would cost the company a trillion won per day. Samsung forecast record first-quarter operating profit of 57.2 trillion won, about $38.6 billion — and the union's position is that the workers who generated that record deserve a larger cut. The timing, for the AI memory boom, couldn't be worse. SK Hynix, its Korean rival, posted all-time-high quarterly revenue the same day, citing insatiable demand for HBM chips from data-center operators. However that negotiation resolves — and whenever it does — a walkout would be a real constraint on AI memory supply at precisely the wrong moment.
The geopolitically driven supply crunch is quieter but potentially more durable. The Strait of Hormuz has been effectively closed since early March, cutting off naphtha shipments from the Middle East. Naphtha is a feedstock for the photoresist chemicals used in chip manufacturing, particularly at the EUV wavelengths required for the most advanced nodes. Samsung and SK Hynix are most exposed, per South Korean media reports; their Japanese photoresist suppliers are already warning of raw material disruptions. SK Hynix says it's diversified its sourcing and has sufficient inventory. That may be true today. It is not a structural answer to a supply chain that routes through a strait that can close.
None of this means the chip industry is in trouble. TSMC's A14 roadmap is real; its Arizona packaging plant, due by 2029, is real; the demand for AI compute is not abating. But the week's announcements, taken together, suggest an industry that has migrated its center of gravity. The transistor — the object that defined semiconductor competition for fifty years — is now a solved problem. What the industry is actually competing on now is harder to name: integration density, packaging architecture, power delivery, memory proximity, supply chain geography. The node number was a convenient shorthand. It is no longer available.
Apple's succession planning landed quietly in the middle of all this. Tim Cook becomes executive chairman September 1; John Ternus, a hardware engineer by training, becomes CEO. Johny Srouji, who architected Apple's custom silicon, becomes Chief Hardware Officer immediately. Cook's record — growing Apple from $350 billion to $4 trillion in market cap — is not primarily a hardware story. It is a supply chain and ecosystem story. The new leadership structure suggests Apple thinks the next phase is hardware again, or still. Srouji's expanded role, overseeing both hardware engineering and hardware technologies, is the clearest signal in the week's news about where Apple's priorities sit. When an engineer gets a new title that includes "hardware" and "officer" in the same breath, pay attention.
The chip industry's next chapter is not being written in nanometers. It is being written in packaging lines, photoresist supply routes, labor contracts in South Korea, and sovereign AI compute agreements in Abu Dhabi. The transistor will keep shrinking. It stopped being the point some time ago.
