When a neural network controls a motor, regulators need to know it won't do something unexpected at 70 miles per hour. How to certify AI as safe in a safety-critical, real-time system is the gap a new standard and a chipmaker whitepaper are trying to close.
ISO 26262, the dominant functional safety standard for road vehicles, was designed around deterministic software: code that does the same thing every time. ISO 21448 extended it to cover the "expected functionality" of advanced driver assistance systems, or ADAS. But neither was built for a neural network that can behave differently across inputs it was never trained on, or that lacks the traceability traditional software gives you for free. According to a joint whitepaper from Fraunhofer IKS and Infineon Technologies, these existing standards do not fully address AI-specific properties like robustness, resilience, and transparency.
The chip in question is Infineon's AURIX TC4x, the latest generation of the company's flagship automotive microcontroller line. The whitepaper uses an AI-Based Motor Control Unit as its illustrative use case: not a production system, but a reference architecture showing how an AI function could run on silicon that also handles safety-critical tasks like torque vectoring or battery management. The AURIX TC4x is positioned as the platform where that integration is feasible. The whitepaper is, in effect, Infineon marketing with Fraunhofer branding: the document is gated behind a form requiring a phone number, and the chip it showcases is the chip it is designed to sell.
ISO/PAS 8800, published by the International Organization for Standardization in December 2024, is the first global standard explicitly addressing AI safety in road vehicles. It attempts to fill the gap by introducing AI-specific safety lifecycle requirements, covering things like out-of-distribution detection at runtime, uncertainty quantification, and the development-time measures needed to ensure a neural network behaves predictably in a safety-critical context. It does not specify which chip runs the AI. It specifies what the AI has to do, and how you prove it.
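ISO/PAS 8800 does not prescribe a specific algorithm for these runtime measures. As one illustration of the kind of mechanism the standard's requirements point toward, a runtime monitor might gate a classifier's output on a confidence check and an energy-based out-of-distribution score, falling back to a safe behavior when either test fails. Everything below is a hypothetical sketch: the function names, thresholds, and fallback policy are illustrative assumptions, not anything the standard or the whitepaper specifies.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over a 1-D logit vector.
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def energy_score(logits):
    # Energy-based OOD score: lower (more negative) values suggest the
    # input resembles the training distribution; higher values suggest
    # an out-of-distribution input.
    return -np.log(np.exp(logits).sum())

def gated_decision(logits, conf_threshold=0.9, energy_threshold=-5.0):
    """Return (action, reason): the model's class index if both runtime
    checks pass, otherwise a fallback signal for a safe degraded mode.
    Thresholds here are arbitrary placeholders, not calibrated values."""
    probs = softmax(logits)
    if probs.max() < conf_threshold:
        return ("FALLBACK", "low confidence")
    if energy_score(logits) > energy_threshold:
        return ("FALLBACK", "possible out-of-distribution input")
    return (int(probs.argmax()), "accepted")

# Confident, in-distribution-looking logits: one class dominates.
print(gated_decision(np.array([8.0, 1.0, 0.5])))   # (0, 'accepted')
# Ambiguous logits: the monitor refuses and hands off to a fallback path.
print(gated_decision(np.array([1.1, 1.0, 0.9])))   # ('FALLBACK', 'low confidence')
```

The point of the sketch is the shape of the requirement, not the particular scores: the standard asks that the system know when not to trust its own output, and that this decision be made at runtime, within the control loop's timing budget.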
On June 24, 2025, Geely Auto became the first global automaker to receive ISO/PAS 8800 certification, awarded by SGS-TUV Saar and accredited by Germany's DAkkS. Geely's press release frames the certification as a milestone in automotive AI safety. That framing is accurate as far as it goes, but it does not say much about what the standard actually requires or whether the certification process validated anything beyond Geely's documentation and internal processes. Certification means a third party audited Geely's workflow against the standard's requirements. It does not mean a neural network passed a benchmark test, or that any AI system has operated without incident.
The standard itself is still young. ISO/PAS 8800 was published in late 2024; the certification of a single automaker six months later does not establish a track record. What it establishes is that the paperwork is possible. Whether the underlying technology (AI running on embedded hardware, making real-time decisions that affect vehicle dynamics) is actually trustworthy by the standard's own definition remains an open question the whitepaper describes but does not resolve.
Geely's scale gives the certification some weight. The company sold over 2.17 million vehicles in 2024, a 32 percent year-on-year increase, and holds 1,562 patents in vehicle safety technologies, according to figures cited in its announcement. If ISO/PAS 8800 becomes a regulatory requirement in major markets (the EU has been moving toward mandatory AI safety frameworks for vehicles), Geely's head start is a competitive asset. For everyone else, the question is whether their AI functions can be certified at all, and whether the hardware they run on is fast enough, reliable enough, and bounded enough to satisfy a standard that still lacks the interpretation guides and conformance tests that gave ISO 26262 its teeth after two decades of use.
The Fraunhofer IKS and Infineon whitepaper offers a technical roadmap. The Geely certification offers a data point. Neither offers a verdict. The standard exists because the problem is real: AI is moving into safety-critical embedded systems faster than the frameworks designed to keep those systems from harming people. What the standard does not yet have is a demonstrated path from paper requirements to shipped hardware that performs as specified, in the field, at scale, under the conditions real driving involves.
That is the work that comes next, and it will take longer than the certification process.