EPIC Microsystems raised $21 million in a Series A round led by Seligman Ventures, with participation from Intel Capital and others. The March 25, 2026 announcement was light on technical specifics, but the San Jose startup is betting that a power architecture borrowed from the mobile world can solve a problem that conventional approaches are running out of room to fix: delivering electricity to AI accelerators inside increasingly dense data center racks, without losing half the power to heat in the process.
The core claim is that EPIC's hybrid switched-capacitor (HSC) architecture replaces bulky inductors — the magnetic components that have handled DC-DC voltage conversion in server racks for decades — with a capacitive circuit that's denser and generates less waste heat. The company's founding team previously applied switched-capacitor design to fast-charging circuits in smartphones and laptops, work EPIC says reached "hundreds of millions of devices." Whether that experience transfers directly to megawatt-scale rack power is the open question.
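EPIC's hybrid design is not public, but the textbook building block behind any switched-capacitor converter is easy to sketch: a "flying" capacitor shuttles charge between rails instead of an inductor storing energy in a magnetic field. Below is a toy discrete-time model of a plain 2:1 charge-pump divider; all component values are illustrative, and this is the generic principle, not EPIC's architecture.

```python
# Toy two-phase model of a 2:1 switched-capacitor divider.
# Component values are illustrative; "full settling" per phase is an
# idealization (no load, no switch resistance).

def sc_divider(v_in, c_fly=1e-6, c_out=10e-6, cycles=50):
    """Ideal 2:1 switched-capacitor divider: returns the settled output."""
    v_out, v_fly = 0.0, 0.0
    for _ in range(cycles):
        # Phase 1: flying cap sits in series between input and output,
        # charging to the difference between the two rails.
        v_fly = v_in - v_out
        # Phase 2: flying cap connects in parallel with the output cap;
        # total charge is conserved across the redistribution.
        q = c_out * v_out + c_fly * v_fly
        v_out = q / (c_out + c_fly)
        v_fly = v_out
    return v_out

print(sc_divider(12.0))  # converges toward 6.0 V — half the input, no inductor
```

The fixed point is exactly v_in / 2: in steady state the flying cap charges to v_in − v_out and then equalizes with the output, which only balances when v_out is half the input. The appeal for rack power is that the energy-shuttling element is a capacitor, which stores far more energy per unit volume than an inductor of comparable size.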
The pressure EPIC is stepping into is real and well-documented. As AI compute density climbs, rack power demands are approaching one megawatt per rack — a threshold that exposes the physical limits of the 54 VDC power distribution architecture that data centers have used for years. NVIDIA has published the math directly: at 54 VDC in a single 1 MW rack, copper busbars alone can weigh 200 kilograms. At 800 VDC — the infrastructure shift NVIDIA is pushing for 2027 deployments — the same power transmits through thinner conductors, requiring roughly 45 percent less copper and losing less to resistive heating along the way.
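The arithmetic behind that shift is simple: for a fixed power, bus current scales as I = P / V, and conduction loss in the busbar scales as I²R. A back-of-envelope sketch (the resistance value is purely illustrative, not a measured busbar figure from NVIDIA or EPIC):

```python
# Why higher distribution voltage helps: current I = P / V, and
# resistive loss in the busbar is P_loss = I^2 * R.
# R below is an assumed, illustrative busbar resistance.

P = 1_000_000.0   # rack power, watts (1 MW, per the article)
R = 50e-6         # assumed busbar resistance, ohms (illustrative)

for v in (54.0, 800.0):
    i = P / v              # bus current, amps
    loss = i * i * R       # I^2 R conduction loss, watts
    print(f"{v:>5.0f} VDC: {i:>8.0f} A, ~{loss / 1000:.2f} kW lost in busbar")
```

At 54 VDC the bus carries roughly 18,500 A versus about 1,250 A at 800 VDC — a ~15x reduction in current, and a ~220x reduction in I²R loss for the same conductor. That is why the same megawatt moves through far thinner, lighter copper at the higher voltage.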
That 800 VDC transition is where EPIC's technology could sit. The company's HSC design is described as silicon-proven, meaning it has been fabricated and tested on actual silicon rather than just simulated. The press release claims the architecture delivers gains in current density, power efficiency, and thermal performance simultaneously — a combination that matters because hyperscalers currently face compounding tradeoffs: push more current through a rack and you either run out of physical space for busbars, exceed thermal limits, or both.
Seligman Ventures led the round, making EPIC its second disclosed semiconductor bet in under two months. The firm launched as a $500 million venture vehicle in February 2026, per reporting by Fortune, and the deal puts Umesh Padval — a semiconductor veteran with over 35 years of operating and board experience — on EPIC's board. Intel Capital's participation adds institutional weight, and Intel chief executive Lip-Bu Tan is listed as a seed investor; in the press release he calls power delivery innovation "as defining as advances in AI compute silicon." That quote is the kind of thing a CEO says when they believe something and have skin in the game; it is also, notably, the kind of thing a CEO says when they want to legitimize a bet they already made quietly.
The competitive context is important. EPIC is not the only startup trying to solve on-rack power delivery at AI scale. PowerLattice raised $25 million in November 2025 with backing from former Intel chief Pat Gelsinger, using a chiplet approach with miniaturized magnetic inductors — a bet on improving the inductor rather than replacing it. Vertical Semiconductor, an MIT spinout, took $11 million in October 2025 to commercialize vertical gallium nitride (GaN) transistors for the same problem. And the established power semiconductor ecosystem — Texas Instruments, Navitas, Infineon, Renesas, STMicroelectronics, Analog Devices — is actively working the 800 VDC transition on the vendor side, per NVIDIA's published partner list for its HVDC architecture.
EPIC's differentiator is the switched-capacitor approach rather than GaN or improved magnetics. Whether that holds at commercial scale, under real thermal load, inside a hyperscaler's preferred rack architecture, is unverified outside the company's own testing. The press release does not name any hyperscale customers or deployments. "Silicon-proven" is a meaningful step up from "roadmap," but it is not "in production."
The $21 million will presumably pay for the gap between proven silicon and a product a data center operator will actually buy. In power semiconductors, that gap is not small. Designs that work on a test chip under controlled conditions routinely fail to translate to volume production at acceptable yields. EPIC's team has done this before in consumer hardware, where failure means a phone charges slowly. At data center scale, failure means a rack that runs hot or shuts down under load — a different consequence profile.
What to watch: whether EPIC names a Tier 1 hyperscale customer in the next six months, and whether its silicon moves from proof-of-concept to a qualification program at any of the eight major cloud providers. The architecture is interesting. The execution risk is real. The funding is not large enough to absorb a long sales cycle with enterprise customers who routinely make hardware vendors wait 18 months to qualify a new component.
This story slots into a broader pattern — AI compute hitting the power wall. We covered Groq's $20 billion inference deal and Balaji Srinivasan's energy crisis warning in the past 18 hours. The common thread is that the economic case for AI expansion is running into a physical constraint that cannot be solved by writing a larger check. It has to be solved in electrons.