AI can now design a chip in twelve hours. The harder problem is proving it works.
That is the actual insight from Verkor.io's Design Conductor — a harness that constrains large language models and steers them through chip design tasks without drifting into engineering dead ends. When IEEE Spectrum reported on the system this week, the headline was the twelve-hour milestone. The more consequential detail was what Design Conductor did not solve: verification, the expensive and labor-intensive process of proving a chip design actually functions before it goes to a fabrication plant.
Chip design has two phases: architecture and verification. Architecture is where you decide what the chip does and how it does it. Verification is where you prove it actually works — every instruction, every edge case, every timing constraint, checked against reality. Industry data holds that verification consumes more than half the development budget and most of the calendar. "It is widely understood that bringing a new leading-edge design to market costs well over $400M and takes 18–36 months even with an engineering team numbering in the hundreds," Verkor's paper notes.
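The asymmetry can be made concrete with a toy model. The sketch below is purely illustrative (real verification runs RTL through simulators and formal tools, not Python): the "design" of a 4-bit adder is one line of logic, while even the most naive verification of that trivial block means checking every input combination, and the input space doubles with every added bit.

```python
# Toy illustration of the design/verification asymmetry.
# "Design": a 4-bit adder with carry-out -- one line of logic.
def add4(a: int, b: int) -> tuple[int, int]:
    """Return (sum mod 16, carry-out) for 4-bit inputs."""
    total = a + b
    return total & 0xF, total >> 4

# "Verification": even at 4 bits, exhaustive checking means
# 16 * 16 = 256 input vectors. At 64 bits it would be 2**128,
# which is why real verification leans on constrained-random
# testing, coverage metrics, and formal proofs instead of
# brute-force enumeration.
def verify_exhaustive() -> int:
    checked = 0
    for a in range(16):
        for b in range(16):
            s, carry = add4(a, b)
            assert s == (a + b) % 16
            assert carry == (a + b) // 16
            checked += 1
    return checked

print(verify_exhaustive())  # 256 vectors for a trivial 4-bit block
```

The point of the toy is the scaling, not the adder: design effort grows roughly with the size of the description, while exhaustive correctness grows with the size of the state space.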
Design Conductor solved the first phase. It did not touch the second.
"The chip has not been physically produced," IEEE Spectrum reported, citing Verkor directly. VerCore was validated only in simulation, checked against Spike, the open-source RISC-V reference simulator, and laid out using the ASAP7 process design kit, an academic 7 nm standard rather than a commercial fabrication node. No fab turned silicon. The GDSII file exists; the board it would sit on does not.

Design Conductor is also not itself an AI model. It is a harness: a system of constraints and tool calls wrapped around large language models. The constraints are necessary because the models do wander. "The agent made a mistake in timing," IEEE Spectrum reported. "The model did not recognize the cause and made broad changes while hunting for the fix. It did eventually find a fix, but only after reaching many dead ends." The paper's own language is more clinical: "many tens of billions of tokens consumed" across the twelve-hour run. What that compute cost, or who paid for it, is not disclosed.
Synopsys and Cadence, the two dominant electronic design automation incumbents, have had agentic AI tools for some time. Their systems automate individual tasks within the verification and simulation pipeline — a design rule check here, a coverage closure loop there. Useful. Incremental. "These allow chip architects to automate some tasks with AI agents," IEEE Spectrum observed. "Design Conductor is different because it is built to handle chip design from spec to completion with full autonomy."
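Coverage closure, one of the tasks those agentic tools target, has a simple core loop: generate stimulus, record which predefined scenarios ("bins") each test exercises, and keep going until every bin is hit. The sketch below is a hedged toy in Python with invented bin names; real coverage models are written as SystemVerilog covergroups inside commercial simulators, and the hard part is the last few bins, not the loop.

```python
import random

# Toy coverage model for an imaginary 8-bit operand: bins for the
# zero value, the maximum value, and everything in between. The bin
# names and predicates here are made up for illustration.
BINS = {
    "zero": lambda x: x == 0,
    "max":  lambda x: x == 255,
    "mid":  lambda x: 0 < x < 255,
}

def coverage_closure(seed: int = 0, budget: int = 10_000):
    """Run random stimulus until all bins are hit or budget is spent."""
    rng = random.Random(seed)
    hit, tests = set(), 0
    while len(hit) < len(BINS) and tests < budget:
        stimulus = rng.randrange(256)  # constrained-random stimulus
        tests += 1
        for name, matches in BINS.items():
            if matches(stimulus):
                hit.add(name)
    return hit, tests

hit, tests = coverage_closure()
print(sorted(hit), tests)
```

Uniform random stimulus hits the "mid" bin almost immediately but can take hundreds of draws to land exactly on 0 or 255, which is the miniature version of why closing the final corner-case bins dominates verification schedules.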
That claim is worth examining. "Full autonomy" through architecture does not mean autonomy through verification. A human team of five to ten specialists, all experts in different subdomains, is still required per design cycle, the Verkor paper says. The bottleneck has not vanished. It has moved.
When AI automates architecture, the scarce resource in chip development does not disappear; it relocates. Verification is where domain knowledge actually lives in modern semiconductor work: understanding instruction-set behavior, corner-case interactions, clock-domain crossings where signals pass between unsynchronized clocks. AI can propose chip layout code. A human still has to be confident enough to sign it off for fabrication.
The implication for the industry is counterintuitive. The conventional reading of Verkor's result is that it threatens EDA incumbents — that end-to-end automation displaces point-tool vendors. The more accurate reading may be the opposite. If architecture becomes abundant, verification workflow ownership becomes the competitive moat. Cadence and Synopsys have spent decades accumulating the verification IP, the coverage models, the sign-off correlations that make their tools trusted. That trust is not in design generation. It is in design validation.
What Verkor has actually demonstrated is not that AI can replace chip engineers. It has shown that AI can compress the front end of the design cycle — the speculative, creative phase — from months to hours. The back end remains human-intensive and expensive. Whether that changes depends on whether anyone can build an AI system that is as reliable in verification as it has become in architecture generation. That is a hard problem. It is the interesting problem.
Verkor says it is working with unnamed top-10 fabless semiconductor companies. The design files for VerCore are promised for release by the end of April. Whether the RTL and GDSII hold up to independent inspection — whether the tapeout-ready claim survives scrutiny from engineers not on the team — will tell us whether this is a genuinely new capability or an impressive demonstration of a narrow slice of the problem.
Until then, the honest summary is this: AI learned to design chips. The hard part is proving they work.