SRAM has a physics problem, and the timing is inconvenient. The workhorse memory inside every AI chip stopped shrinking in 2022, when the semiconductor industry reached the 3nm node and SRAM bit cells ran out of room, according to Mark LaPedus at Semiconductor Engineering. The problem is not that engineers stopped trying. It is that SRAM's six-transistor design does not compress below a certain size, and the nanosheet transistors now replacing finFETs in advanced fabs make the cell taller, not shorter.
The numbers are uncomfortable. SRAM occupies 70 to 75 percent of die area in many AI inference chips, RAAAM told Blocks and Files. In some automotive and edge processor designs it exceeds 75 percent. As AI accelerator makers push toward larger on-chip caches to reduce trips to external DRAM, they are running into a wall: the memory that makes up most of their chip cannot scale along with the logic it sits next to.
"We are hitting a physical limit with SRAM," an NXP vice president said in a statement announcing the company's partnership with RAAAM. "Fifty percent or more of SoC die areas can consist of embedded SRAM. In advanced process nodes, SRAM scaling limitations can challenge power and performance," the company said in a press release covered by EE News Europe.
The semiconductor research organization imec quantified the transition problem: moving from finFETs to nanosheets at the 3nm node adds roughly 40nm to the SRAM bit cell height if layout margins stay constant, according to Semiconductor Engineering. Logic shrinks. SRAM gets bigger relative to it. Andy Timin, CEO of Flex Logix, told the publication that his customers at advanced nodes have noticed exactly this. "Our customers who work on advanced nodes are all complaining that the logic is scaling better and faster than the SRAM," he said. "That is unusual, because it is unusual to have cache memories bigger than the whole processor."
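The compounding effect is easy to see with a back-of-the-envelope sketch. The per-node scale factors below are hypothetical round numbers, not imec or RAAAM data; the point is only that when logic area shrinks faster than SRAM area, SRAM's share of the die grows node over node:

```python
# Illustrative sketch (assumed numbers): how SRAM's share of die area
# grows when logic scales faster than SRAM across process nodes.

def sram_share(sram_area: float, logic_area: float) -> float:
    """Fraction of total die area occupied by SRAM."""
    return sram_area / (sram_area + logic_area)

sram, logic = 50.0, 50.0                 # start at an arbitrary 50/50 split
LOGIC_SHRINK, SRAM_SHRINK = 0.70, 0.95   # assumed area scaling per node

for node in range(4):                    # four hypothetical node transitions
    print(f"node {node}: SRAM share = {sram_share(sram, logic):.0%}")
    logic *= LOGIC_SHRINK
    sram *= SRAM_SHRINK
# Prints 50%, 58%, 65%, 71%: a 50/50 chip drifts toward the 70-percent-plus
# SRAM fractions the article describes, with no change in cache capacity.
```

With these made-up factors, an even split reaches roughly 70 percent SRAM in three node transitions, which is the dynamic behind the complaints from advanced-node designers.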
RAAAM, a deep-tech startup founded in May 2021 by four PhDs from Bar-Ilan University in Israel and EPFL in Switzerland, thinks it has a practical answer. The company has developed Gain-Cell Random Access Memory (GCRAM), which replaces SRAM's six-transistor cell with three transistors, a design that has been in the academic literature for years but proved difficult to manufacture reliably, as Blocks and Files reported. RAAAM's version is CMOS-compatible: it fits standard logic fabrication flows without exotic materials or additional mask layers. No capacitor, no trench, no departure from what a foundry already knows how to do.
"We've developed a completely new embedded memory architecture," RAAAM CEO Robert Giterman told Semiconductor Engineering. "This three-transistor cell delivers up to 50 percent area reduction and up to 10X lower power compared to a traditional six-transistor SRAM."
The company has working silicon. RAAAM fabricated a test chip through TSMC at 16nm FinFET showing functional memory meeting projected latency and density targets, and successfully evaluated the design in 5nm FinFET technology, the company said in a press release covered by EE News Europe. A 256-Mb test chip qualification at TSMC's 2nm node is underway, the final checkpoint before mass production. One top-10 TSMC customer has signed on as lead licensee under an NDA and cannot be named.
The power numbers are real. At 16nm FinFET, RAAAM demonstrated a minimum operating voltage of 450 millivolts versus 700 millivolts for standard SRAM at the same node, EE News Europe reported. The company claims GCRAM consumes approximately 10 percent of SRAM power at comparable density. The trade-off is refresh: because gain cells store charge on a transistor's gate capacitance rather than in a bistable latch, they leak and require a refresh roughly every 100 clock cycles at 1GHz operation, yielding 97 to 99 percent array availability, RAAAM explained. Giterman calls this acceptable for most cache applications. It is a real limitation, since SRAM needs no refresh at all, but a manageable one.
NXP Semiconductors, the Dutch chip company with deep roots in automotive and industrial markets, led RAAAM's $17.5 million Series A in November 2025, joined by Fusion Fund and industry veterans, TechTime reported. RAAAM has 22 employees operating from Petah Tikva, Israel, and Lausanne, Switzerland, and has received 5.5 million euros from the European Innovation Council, New Electronics reported.
Independent researchers published an open-source validation of the GCRAM architecture in 2025, corroborating density and performance claims, according to a paper posted to arXiv. The paper notes that RAAAM's IP is patented and that the research community does not have open access to the full design.
Now for the uncomfortable second-order story. The SRAM design IP market is dominated by Synopsys, Cadence, Arm, and Rambus, with specialty players including Silvaco, Dolphin Integration, eMemory, and Avalanche, according to SkyQuest Technology Consulting. These companies have spent decades building SRAM compilers, the automated tools that generate optimized SRAM instances for a given process node, bundled into electronic design automation (EDA) reference flows that chip designers use as starting points for every project. If GCRAM becomes a credible drop-in SRAM replacement, those compiler toolchains need to be rebuilt. SRAM IP portfolios that took years to certify become legacy products. The customers buying SRAM IP and SRAM compilers are the same companies now evaluating GCRAM. None of the major SRAM IP vendors have announced a GCRAM licensing agreement or any public response to the competitive threat.
The competitive context matters here. eDRAM, the obvious alternative density approach, failed at scale because its one-transistor-one-capacitor cell requires trench capacitors that break CMOS compatibility, RAAAM explained. Every additional mask layer in a logic fab is a yield risk and a cost adder. GCRAM's drop-in story is the price of entry in a market where AI chip designers are desperate for cache relief.
NXP's strategic intent is not subtle. Automotive and industrial chips are increasingly AI-capable edge devices where die area is money and power is runtime. If GCRAM survives TSMC 2nm qualification, it gives NXP a differentiation lever against competitors still committed to conventional six-transistor SRAM. The broader question is whether TSMC's other advanced customers follow NXP's lead, or wait for a second source before committing to a technology swap that touches every cache instance in their chip.
The answer depends on whether GCRAM yields as cleanly as the lab numbers suggest, and whether a second foundry will offer it. A proprietary memory IP locked to one foundry is a risk for fabless chip designers making long-lead architecture bets. RAAAM is working on it.