The United States spent years and considerable diplomatic capital trying to cut China off from the chips that power advanced robotics. NVIDIA's H100 and H200 are effectively blocked from export to Chinese customers. The theory was that without the silicon, China's robotics industry would stall.
The theory is breaking down — not because the chips got through, but because the training software arrived through a different door.
This week, NVIDIA's Newton physics engine reached general availability. It is open source, published on GitHub under the Apache-2.0 license, managed by the Linux Foundation. Anyone can download it, fork it, or build on it. There is no export license required. Per the Linux Foundation's official guidance, software that is "publicly available without restrictions upon its further dissemination" is considered published under the Export Administration Regulations and therefore not subject to them.
Newton is not a toy. It is a GPU-accelerated physics simulation engine co-developed with Google DeepMind and Disney Research, built on NVIDIA's Warp framework and OpenUSD. According to the Linux Foundation, its key features include stable articulated mechanism simulation, hydroelastic contact modeling, and deformable body simulation for cables, cloth, and rubber — the hardest problems in robot manipulation. Getting a robot hand to pick up a wire harness without tangling it has blocked home robot deployment for a decade. Newton's open-source release means any lab in the world can now build on that capability without negotiating a license.
The download numbers suggest the world noticed. NVIDIA's Physical AI Dataset — the corpus used to train robots on synthetic trajectories — has been downloaded 4.8 million times. Cosmos Reason — the reasoning module that plugs into the GR00T robot foundation model — has crossed one million downloads and ranks first on Hugging Face's Physical Reasoning Leaderboard. Cosmos world foundation models have three million downloads. These are not niche research utilities. They are becoming infrastructure.
China's Unitree Robotics is already there. Unitree makes the G1, a 29-degree-of-freedom humanoid robot that competes directly with offerings from Figure AI and Agility Robotics. The company's engineering team maintains a public GitHub repository with official support for NVIDIA Isaac Lab. Their documentation describes a sim-to-real transfer workflow for the G1 using the Newton physics backend — the same workflow available to any researcher with a GPU and an internet connection.
One second of simulation on an NVIDIA RTX 4090 GPU, according to NVIDIA's benchmarks, equates to roughly 27 minutes of real-world robot experience, a compression of about 1,600 to one. That ratio means a Chinese lab running Isaac Lab on consumer NVIDIA hardware — readily available outside export-controlled channels — can accumulate years of synthetic training data in weeks.
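The arithmetic behind that claim is worth making explicit. A minimal sketch, taking only the 27-minutes-per-second benchmark figure from NVIDIA as input (the function names here are illustrative, not from any NVIDIA tool):

```python
# NVIDIA's benchmark figure: 1 second of RTX 4090 simulation
# corresponds to ~27 minutes of real-world robot experience.
SIM_SECOND_TO_REAL_MINUTES = 27

def compression_ratio() -> int:
    """Seconds of real-world experience per second of wall-clock simulation."""
    return SIM_SECOND_TO_REAL_MINUTES * 60  # 27 min x 60 s/min = 1620x

def real_years_per_sim_weeks(weeks: float) -> float:
    """Years of synthetic robot experience from `weeks` of continuous simulation."""
    sim_seconds = weeks * 7 * 24 * 3600
    real_seconds = sim_seconds * compression_ratio()
    return real_seconds / (365 * 24 * 3600)

print(f"compression: {compression_ratio()}x")                      # 1620x
print(f"1 sim week: {real_years_per_sim_weeks(1):.1f} years")      # ~31.1 years
```

A single week of wall-clock simulation yields roughly three decades of equivalent robot experience, which is why "years of training data in weeks" is, if anything, an understatement of what continuous simulation on commodity hardware can produce.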
The Export Administration Regulations were written before this model of software distribution existed. Their enforcement mechanism targets manufacturers shipping controlled hardware. They do not cover pip installs. They do not cover Hugging Face model downloads. They do not cover a fork of an Apache-2.0 repo. That is not a loophole — it is a structural gap that the regulations simply never anticipated, because they were designed to control semiconductors, not open-source ecosystems.
The practical consequence is this: a Chinese robotics company can today download the same physics engine, the same world models, and the same robot foundation models as a US startup, train on the same synthetic datasets, and run inference on the same open weights. The hardware advantage that US policy sought to preserve is real but narrowing — and the software layer where the actual competitive differentiation lives is already freely shared.
This does not mean US policy has lost. It means the battleground shifted and nobody updated the playbook. The semiconductor export control framework remains potent for cutting-edge GPU clusters and the data centers that run them. But for the robotics industry specifically, the training stack has become commodity infrastructure. The question policymakers have not yet answered is whether that matters more than the chip restrictions — or whether the next generation of competitive robotics is being built right now, one pip install at a time, outside the scope of any regulation currently on the books.
What is clear is that NVIDIA, Google DeepMind, and Disney Research collectively decided that handing the physics engine to the Linux Foundation was worth more than the competitive moat of keeping it proprietary. TechCrunch reported in January that NVIDIA is explicitly positioning its Isaac platform as the Android of generalist robotics — an open stack that hardware manufacturers build on rather than compete with. That decision is also a quiet bet that US regulators will not be able to touch the training layer of the most consequential robotics race in a generation, no matter how many chips they block.