Two Governments, Two Robots
The machines are coming faster than the rulebooks. On one side of the Pacific, the United States Senate is racing to ban Chinese humanoid robots from federal property. On the other, China is publishing its first comprehensive technical standard for the very same machines, and the two efforts are running directly into each other.
Senators Tom Cotton and Chuck Schumer introduced the American Security Robotics Act on March 26, a rare bipartisan pairing that would prohibit federal agencies from buying or operating unmanned ground vehicles manufactured by adversarial nations, primarily China. The bill covers humanoid robots, autonomous patrol systems, and any robotic system that could transmit data back to a foreign adversary or be remotely controlled from abroad. Representative Elise Stefanik introduced a companion measure in the House the same day. The lawmakers' stated concern: Chinese robots could be used for espionage, sending sensitive data back to Beijing or accepting remote commands without the knowledge of their operators.
"Robots made by Communist China threaten Arkansans' privacy and our national security," Cotton said in a statement. Schumer called it "their standard playbook — this time in robotics — trying to flood the U.S. market with their technology."
The security argument is not hypothetical. A group of lawmakers separately urged the Pentagon last year to add Unitree, the Chinese robotics firm whose low-cost humanoid and quadruped robots have become staples in university research labs worldwide, to a list of companies that aid China's military. Unitree has denied military ties.
Here is the part the bill does not address: the same Chinese companies now targeted for exclusion from U.S. government procurement are racing to build robots that the world does not yet know how to regulate.
China's Ministry of Industry and Information Technology published the country's first national standard system for humanoid robots and embodied AI in late February 2026. The framework, developed by a technical committee of over 120 researchers, executives, and policymakers from leading Chinese robotics firms and research institutes, covers six domains: foundational standards, neuromorphic computing, limbs and components, system integration, application scenarios, and safety and ethics. It is the most comprehensive domestic regulatory effort aimed at humanoid robots to date, and it arrives as Chinese manufacturers are producing the machines at a scale that would have seemed implausible three years ago.
Agibot announced on March 30 that it had produced its 10,000th humanoid robot, scaling from 5,000 to 10,000 units in three months. UBTech plans to produce 5,000 units in 2026 and 10,000 in 2027. According to China's MIIT, more than 140 domestic manufacturers released over 330 different humanoid robot models in 2025 — China's self-described first year of mass production. These robots are already appearing in logistics centers, factories, and, in at least one Shanghai McDonald's, replacing human servers.
The standard system attempts to impose order on that explosion. Its safety framework operates on three levels. Physical safety mandates cover structural integrity, emergency stop mechanisms, thermal management for batteries, and force limiting, so a robot arm cannot crush a human finger without the hardware sensing and stopping it. Behavioral safety requires predictable failure modes: if a robot loses contact with its control system, it must default to a safe state, freezing or slowly lowering its arms, rather than continuing to operate unpredictably. Ethical and operational safety govern when a robot can make autonomous decisions versus when a human must intervene.
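The fail-safe requirement is easier to see in code than in prose. The sketch below is purely illustrative, not drawn from the standard's text: a heartbeat watchdog that drops the robot into a safe state when the control link goes silent for too long. All names and the timeout value are hypothetical.

```python
import time

OPERATING = "operating"
SAFE_STOP = "safe_stop"  # freeze actuators, slowly lower arms

class HeartbeatWatchdog:
    """Illustrative fail-safe: if no message arrives from the control
    system within `timeout_s`, default to a safe state rather than
    continuing to act on stale commands."""

    def __init__(self, timeout_s=0.5):
        self.timeout_s = timeout_s
        self.last_heartbeat = time.monotonic()
        self.state = OPERATING

    def heartbeat(self):
        # Called whenever a packet arrives from the control system.
        self.last_heartbeat = time.monotonic()
        self.state = OPERATING

    def tick(self):
        # Called once per control cycle: check link freshness.
        if time.monotonic() - self.last_heartbeat > self.timeout_s:
            self.state = SAFE_STOP  # stop, don't keep operating blind
        return self.state
```

The point of the pattern is that safety is the default, not an action: the robot does not need a command to stop; it needs a steady stream of commands to keep going.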
The standard also borrows explicitly from the grading logic of autonomous vehicles. In May 2025, the Beijing Humanoid Robot Innovation Center published a Humanoid Robot Intelligence Grading standard (T/CIE 298-2025), the first of its kind anywhere. It scores a machine across four dimensions, Perception and Cognition, Decision and Learning, Execution and Performance, and Collaboration and Interaction, on a five-level scale that maps to how much autonomy the machine is allowed. L1 is pre-programmed and static; L5 is full independence in any environment.
The standard's architects acknowledge its limits. Peng Zhihui, co-founder of Agibot and a deputy director of the standardization committee, noted that in industrial settings, roughly 80 percent of the tasks where humans still outperform traditional automation involve tactile sensing, a technology that remains insufficiently standardized. A robot may know it should not crush a human hand. It may not know it is doing so until the damage is done.
Wang Xingxing, founder and CEO of Unitree Robotics, put the case for standardization plainly: "To enable humanoid robots to genuinely work, particularly on long-sequence tasks, industry-wide standards are absolutely essential."
That case is not hard to follow. The harder question is whether the American Security Robotics Act helps or hinders the same goal.
The bill would block federal agencies from buying Chinese humanoid robots, but it would also prohibit the use of federal funds in connection with those robots, language that could extend to U.S. universities relying on federal grants. The Unitree G1, a low-cost humanoid platform retailing for roughly $16,000, has become a standard research tool in university robotics labs precisely because it is cheap and relatively capable. If federal grant money cannot be used to buy it, some research programs will have to choose between the grant and the robot.
There is a deeper problem. The U.S. has no comparable domestic standard for humanoid robots. There is no federal safety framework, no grading system, no certification process. The American Security Robotics Act tells the government what it cannot buy. It says nothing about what any robot, American or otherwise, must meet before it operates in proximity to people.
China's standard system is not altruistic. It is also a strategic document. As a report from the Netherlands Institute of International Relations notes, China has shifted since 2018 from taking international standards to making them, using programs like China Standards 2035 to embed domestic technical specifications into global supply chains. A country that writes the rulebook often wins the race even when the rules are published openly, because compliance is expensive and local expertise is an advantage.
The U.S. could respond by writing its own rules. Or it could continue running two parallel tracks: banning Chinese robots on security grounds while leaving the safety and performance of everything else, domestic robots included, to market forces and the occasional liability lawsuit.
The machines are coming. The rulebooks are being written. The question is whether the U.S. is writing one or just circling the exits.