At some point in the design of an analog circuit — a radio receiver, a power management block, the part of a sensor interface that makes sense of the real world — an engineer has to decide how wide to make a transistor. Not a rhetorical question. The width determines current, which determines gain, which determines whether the thing works. And the engineer makes that decision by running SPICE, looking at the output, changing the number, running SPICE again. Repeat for every parameter, across corners, across temperatures, for weeks.
A paper from the University of Florida, published on arXiv in late March as a preprint and not yet peer-reviewed, argues that the ML approach everyone is reaching for — neural networks trained on simulation data — gets this basic task badly wrong. Not just imprecise. Wrong in ways that would send a designer down the wrong road entirely.
The work, by Mohyeu Hussain, David Koblah, Reiner Dizon-Paradis, and Domenic Forte, is the first application of causal inference to analog circuit design, according to the authors (arXiv:2603.24618). They tested their framework on three operational amplifier families — OTA, telescopic, and folded cascode — designed in TSMC's 65nm process, generating up to 38,000 SPICE simulation samples per circuit. The baseline comparison is a standard multilayer perceptron trained on the same data.
The numbers are not close. Across all three circuits, the causal model reproduces the simulation-based Average Treatment Effect — the expected change in a performance metric when you perturb a parameter — with an average absolute error under 25 percent. The neural network's average deviation exceeds 80 percent, according to the paper. In the folded cascode op-amp, it reaches 237.7 percent. For the bias current parameter Idc in that same circuit, the neural network predicts a positive effect where the simulation shows a strongly negative one. It gets the direction of causation wrong.
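The Average Treatment Effect the paper benchmarks against is simple to state: the expected shift in a performance metric when one parameter is deliberately stepped while the rest of the design space is sampled as usual. A minimal sketch, using a hypothetical stand-in gain model rather than the paper's circuits (the function and constants here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_gain(width, bias):
    """Hypothetical stand-in for a SPICE-measured metric: gain rises
    with device width and falls with bias current (toy model only)."""
    noise = rng.normal(0.0, 0.1, np.shape(width))
    return 40.0 + 5.0 * np.log(width) - 8.0 * bias + noise

# ATE of a bias step: E[gain | do(bias = b + db)] - E[gain | do(bias = b)],
# averaged over the sampled design points.
widths = rng.uniform(1.0, 10.0, 5000)
bias0, db = 0.5, 0.1
ate = np.mean(toy_gain(widths, bias0 + db) - toy_gain(widths, bias0))
print(f"ATE of +{db} bias step on gain: {ate:.2f}")  # negative: more bias, less gain
```

The paper's comparison is between this simulation-derived quantity and what each model (causal framework vs. neural network) predicts it to be.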
The core issue is what the paper calls confounding-driven bias. In analog circuits, parameters are entangled. Change the width of one transistor and you indirectly affect the currents in neighboring devices. A neural network trained on observational data picks up those correlations and mistakes them for causal relationships. The causal inference framework — which first discovers a directed acyclic graph from the simulation data, then estimates effects using Pearl's do-calculus — separates the actual causal pathways from the spurious ones. It knows that adjusting one parameter actually causes the performance change, not just that they moved together in the training set.
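The mechanism is easy to reproduce in miniature. In the hypothetical linear system below (not the paper's circuits), a shared confounder drives both the tuned parameter and the metric; a naive correlational fit gets the sign of the effect backwards, while adjusting for the confounder, which is what a backdoor adjustment under do-calculus amounts to here, recovers it:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000

# Hypothetical illustration: a shared bias z drives both the tuned
# parameter x and the performance metric y.
z = rng.normal(size=n)                        # confounder (shared bias)
x = 2.0 * z + rng.normal(size=n)              # parameter entangled with z
y = -1.0 * x + 5.0 * z + rng.normal(size=n)   # true causal effect of x: -1

# Naive correlational fit y ~ x: what a model trained on raw
# observational samples tends to learn.
naive = np.polyfit(x, y, 1)[0]

# Backdoor adjustment: fitting y ~ x + z blocks the path x <- z -> y.
X = np.column_stack([x, z, np.ones(n)])
adjusted = np.linalg.lstsq(X, y, rcond=None)[0][0]

print(f"naive slope: {naive:+.2f}   adjusted slope: {adjusted:+.2f}")
# naive comes out near +1 (wrong sign); adjusted recovers -1
```

This is the same failure mode as the Idc example in the folded cascode: the correlation and the causal effect point in opposite directions.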
Why does this matter? Because analog design is where the real world meets the chip. The radio frequency front end, the baseband, the power regulation, the sensor interface — all analog. Mixed-signal chips are everywhere: smartphones, medical imaging, radar, satellite communications, defense electronics. The paper notes that analog design tradeoff exploration typically consumes 30 to 40 percent of overall chip design effort, per Semiconductor Engineering's coverage. Automating that process badly is not the same as automating it well.
The practical claim is not that causal AI replaces SPICE. It is that it tells you which knobs to turn before you run the simulation. You still run SPICE. You just run fewer of the wrong simulations.
Forte, a professor of electrical and computer engineering at the University of Florida and a 10,000-citation researcher in AI-enabled EDA, has been working in this space for years. The paper builds on prior signal-flow-graph causal modeling work by Jiao et al., but applies a more modern hybrid discovery pipeline — constraint-based skeleton followed by score-based refinement — across multiple circuit topologies and larger sample sets than prior efforts.
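The constraint-based skeleton step of such a pipeline can be sketched on a toy three-variable chain: start fully connected, then drop an edge whenever the pair is marginally or conditionally independent, tested here with partial correlations. This is an invented example under a simple linear-Gaussian assumption, not the paper's pipeline, and the score-based refinement step is omitted:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
n = 10_000

# Toy linear system with true causal chain a -> b -> c (hypothetical).
a = rng.normal(size=n)
b = 1.5 * a + rng.normal(size=n)
c = -2.0 * b + rng.normal(size=n)
data = {"a": a, "b": b, "c": c}

def partial_corr(x, y, z=None):
    """Correlation of x and y after regressing out z (if given)."""
    if z is not None:
        Z = np.column_stack([z, np.ones(len(z))])
        x = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
        y = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(x, y)[0, 1]

# Skeleton phase: remove an edge when any (conditional) independence
# test passes, i.e. |partial correlation| falls below a threshold.
edges = set(combinations(data, 2))
for u, v in list(edges):
    others = [w for w in data if w not in (u, v)]
    tests = [partial_corr(data[u], data[v])]
    tests += [partial_corr(data[u], data[v], data[w]) for w in others]
    if min(abs(t) for t in tests) < 0.05:
        edges.discard((u, v))

print(sorted(edges))  # [('a', 'b'), ('b', 'c')]: a--c pruned, since a ⟂ c | b
```

A score-based pass would then orient and refine the surviving edges, for instance by comparing fit scores such as BIC across candidate directions.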
There are limits. The work is benchmarked on op-amps in a single TSMC process node. Whether the approach generalizes to more complex analog blocks — switched-capacitor circuits, RF building blocks, data converters — is an open question. The paper also does not address what happens when the causal graph itself is wrong, or when the simulation model is imperfect. The framework inherits the fidelity of its training data.
Those caveats are fair. The neural network's failure on this task, however, is less a scorecard entry against causal methods than a cautionary tale about applying the default ML approach to physics-dominated, highly correlated systems. The paper's conclusion is restrained: causal AI provides higher accuracy and explainability, and points toward more efficient AMS design automation. That is the right framing.
The paper is supported by Semiconductor Research Corporation Task 3160.037. No commercial interest is disclosed.
The question worth sitting with: if a neural network cannot reliably tell you which direction a change in bias current pushes the gain of a standard op-amp topology, what does that say about using neural networks for more complex analog tasks — timing closure, substrate noise coupling, PVT corners — before the causal structure is understood? The answer is not nothing.