Peter Steinberger built OpenClaw to organize his own digital life. Beijing is still trying to figure out what to do about that.
Six months after the Austrian iOS developer launched the agent framework as a personal project, China has issued two official security warnings about it, banned it from government devices, and watched its largest cloud providers — Alibaba, Tencent, Baidu — race to offer one-click deployment anyway. Jensen Huang called it the most successful open-source project in history. OpenAI hired Steinberger to build the next generation of personal agents. Baidu is threading it into its main smartphone app. None of that was the plan.
The OpenClaw story is not really about OpenClaw. It's about who builds foundational infrastructure in the age of the autonomous agent — and what that reveals about risk tolerance inside the companies with the resources to move faster. Google has DeepMind. Microsoft has Inflection. Meta has FAIR. Apple shipped a phone. Steinberger shipped a framework that lets anyone connect an AI assistant to their existing workflow, MIT-licensed and open to any model, and the world ran with it. His retrospective, on his personal blog: "I wanted to just show people I've been into the future."
Big companies would have worried too much about what could go wrong. Steinberger shipped it and found out.
The China arc is the sharpest illustration of the contradiction. China's Ministry of Industry and Information Technology warned on February 5 that improper configuration could expose users to data breaches. The national CERT followed on March 11 with a sharper finding: OpenClaw had an "extremely weak default security configuration," was vulnerable to instructions embedded in web pages, and had already suffered vulnerabilities that led to credential theft. Government agencies and state-owned banks were reportedly banned from installing it on official devices. A wave of exposed OpenClaw instances on the public internet followed — a reflection of how fast the tool spread before anyone wrote a hardening guide.
Sixteen days after that CERT warning, StepClaw — an AI assistant built on OpenClaw — was available for installation at the Global Developer Pioneers Summit in Shanghai on March 27. The gap between a government security warning and a commercial deployment at a major developer event is not unusual in China's tech landscape; regulatory signals and commercial interests frequently move on different tracks. But it illustrates the pattern that defines how agent infrastructure spreads: the tooling moves faster than the policy, and whoever deploys first sets the defaults for everyone else.
The OpenAI hire, confirmed by Sam Altman in February, adds another layer to the governance question. OpenClaw will live in an independent foundation — Steinberger's stated condition. But he is now inside a company with strong commercial interests in how agents develop. Nvidia's framing of OpenClaw at GTC as "the operating system for personal AI," analogous to Mac and Windows for personal computing, suggests a platform ambition that may not be fully compatible with foundation governance. Nvidia responded to the security concerns by announcing NemoClaw, an enterprise security layer for OpenClaw running on its stack. That move — open-source sprawl, then enterprise hardening on top — is the same pattern Linux and Docker followed. The difference is speed.
Rokid, a Chinese AI glasses company, points toward where this goes next. OpenClaw community developers are building integrations on the Rokid Glasses Developer Kit, which already supports native multi-model access including Google Gemini and OpenAI's ChatGPT. The glasses have cameras, directional speakers, and noise-canceling microphones. They are the physical layer for Steinberger's vision of a general agent that operates not on your desktop but in front of your eyes.
Steinberger's timeline, offered at ClawCon Tokyo: 2023 and 2024 were the years of ChatGPT, 2025 was the year of the coding agent, and 2026 will be the year of the general agent. Whether that prediction holds depends on questions the technology cannot answer: whether the security model matures before the next wave of exposures, whether foundation governance holds against commercial pressure, and whether Beijing's posture shifts as state cloud providers continue building products on top of a tool the state has warned against.
The lobster, as Steinberger put it, is taking over the world. The question is who controls what comes next.