There is a waiting list to build an AI chip, and if you are not Nvidia, Broadcom, Google, or AMD, you are probably not getting in this year.
A Morgan Stanley analysis reported by 36Kr this week shows just how closed that list is: Nvidia alone is projected to need 595,000 advanced packaging wafers in 2026, roughly 60 percent of all global demand. The four largest customers (Nvidia, Broadcom, Google, and AMD) have together locked up more than 85 percent of TSMC's advanced packaging capacity through at least mid-2026, leaving less than 15 percent for everyone else. TSMC, the Taiwanese manufacturer that produces the vast majority of the world's most advanced semiconductors, is the only game in town for the packaging technology AI chips require, and the queue has become the market.
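Those figures imply a hard ceiling for everyone else. As a quick back-of-envelope check, using only the numbers above and treating TSMC's capacity as a proxy for global supply, as the article itself does (a sketch, not sourced data):

```python
# Back-of-envelope check on the Morgan Stanley packaging figures above.
# Every input comes from the article; nothing here is an independent estimate.

nvidia_wafers_2026 = 595_000   # Nvidia's projected advanced-packaging wafers
nvidia_share = 0.60            # Nvidia's share of global demand
top_four_share = 0.85          # Nvidia + Broadcom + Google + AMD combined

# Implied total global demand for advanced packaging in 2026
total_demand = nvidia_wafers_2026 / nvidia_share     # ~991,700 wafers

# Wafers left over for every buyer outside the top four
remainder = total_demand * (1 - top_four_share)      # ~148,750 wafers

print(f"Implied global demand: {total_demand:,.0f} wafers")
print(f"Left for everyone else: {remainder:,.0f} wafers")
```

Roughly 150,000 wafers a year, split among every other chipmaker on earth.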
The companies that make the memory those chips need have done the same math. Data centers are on track to consume 70 percent of all memory chips produced worldwide in 2026, according to Tech Insider, up from 20 to 30 percent in 2022. Standard memory modules used in phones and laptops have risen 30 to 40 percent in price year over year, and global DRAM production is set to grow just 16 percent this year, below the 20 to 30 percent growth that characterized previous upcycles. The five largest hyperscalers have collectively committed more than $660 billion in capital expenditures for 2026, according to Manufacturing Dive, which gives them priority at every supplier they touch. High Bandwidth Memory, the specialized chips stacked directly alongside processors to move data at the speed AI training requires, now accounts for nearly 20 percent of all DRAM wafer output, up from 19 percent in 2025. Manufacturers are earning 60 to 70 percent gross margins on HBM and have no incentive to redirect capacity toward phones or PCs. Micron has said its 2026 capacity is more than sold out.
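The squeeze on consumer devices follows from that arithmetic. Here is a minimal sketch of why phone and laptop memory got more expensive even as total output grew; the data-center shares come from the figures above, while the assumption that output compounded at 16 percent every year since 2022 is ours, and deliberately generous, since the article describes 16 percent as a below-trend year:

```python
# Why consumer memory got scarce even as total DRAM output grew.
# The 70% data-center share (2026) and the 20-30% range (2022) are from
# the article. ASSUMPTION: output compounds at 16% a year from 2022 to
# 2026; the article only gives 2026's growth rate, so this is a generous
# stand-in, not a reported figure.

output_2022 = 100.0                                # index 2022 output to 100
output_2026 = output_2022 * (1 + 0.16) ** 4        # ~181 under the assumption

dc_share_2022 = 0.25                               # midpoint of the 20-30% range
dc_share_2026 = 0.70

consumer_2022 = output_2022 * (1 - dc_share_2022)  # 75 units
consumer_2026 = output_2026 * (1 - dc_share_2026)  # ~54 units

drop = 1 - consumer_2026 / consumer_2022
print(f"Supply left for phones and PCs: {consumer_2022:.0f} -> "
      f"{consumer_2026:.0f} (down {drop:.0%})")
```

Even under that generous growth assumption, the absolute supply left for everything that is not a data center shrinks by more than a quarter, which is the mechanism behind the 30 to 40 percent price increases.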
Qualcomm, the largest supplier of chips for Android phones, reported $12.25 billion in revenue against a consensus expectation of $11.2 billion — a beat of roughly $1 billion — yet still cut its near-term handset forecast, blaming industry-wide memory constraints. The memory shortage is not letting even successful companies off the hook.
Mid-tier AI chip makers that cannot secure TSMC's packaging slots have limited workarounds. Intel's EMIB, which embeds small silicon bridges in the package substrate rather than using a full silicon interposer, can approximate some of the performance benefits of TSMC's CoWoS but imposes a ceiling that makes it poorly suited to AI training at scale. Samsung's X-Cube, a 3D stacking approach demonstrated with SRAM layered on logic, faces similar constraints. Neither offers a path for a startup to build a competitive AI accelerator at the performance levels hyperscalers demand. TSMC is expanding capacity, but bringing advanced packaging lines online takes years, and analysts do not expect meaningful relief before 2027.
The collateral damage extends beyond phones. Bromine, a chemical essential for semiconductor etching and flame retardancy, has surged to $12,000 per metric ton. Helium prices doubled after strikes in March at Qatar's Ras Laffan complex, where helium is recovered as a byproduct of LNG processing and which accounts for roughly a third of the world's helium supply. These are smaller inputs, but they compound an already tight supply chain at every node.
For any company building a product that competes with AI data centers for memory or manufacturing capacity, the situation is a structural constraint, not a temporary cycle. The hyperscalers have bought the queue. The waiting list is the market now.