Is OpenClaw Really the Next ChatGPT? Why Nvidia's CEO Called This Hot New AI Assistant the Future

On March 17th, Jensen Huang appeared on CNBC's Mad Money and called OpenClaw "definitely the next ChatGPT." That single line generated more coverage than most product launches get in a year. But the more consequential OpenClaw news broke a day earlier, 1,200 miles north, at Nvidia's GTC conference — and it wasn't a quote.
Nvidia announced NemoClaw, a reference stack for the OpenClaw platform. The press release quoted Huang with a more precise framing: "Mac and Windows are the operating systems for the personal computer. OpenClaw is the operating system for personal AI." That's not marketing copy — it's an infrastructure classification. And it came from the CEO of a company that has spent the last decade building the compute layer for every major AI breakthrough.
What NemoClaw actually does
The CNET coverage focused on the lore — the rename saga, the crypto scammers, the lobster mascot that briefly sprouted a human face. That's entertaining, but it obscures what's actually new about NemoClaw.
NemoClaw installs OpenClaw in a single command. Under the hood, it bundles the NVIDIA OpenShell runtime, which provides an isolated sandbox for autonomous agents. It adds a privacy router so agents can route cloud API calls through a controlled egress point, and policy-based guardrails that enforce what the agent is permitted to do. This is the infrastructure that enterprise deployments have been waiting for — not just "can we run an agent," but "can we run an agent that only touches the data we want it to touch."
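The policy-guardrail idea is easy to sketch. The following toy version is a hypothetical illustration only — NemoClaw's actual configuration format and APIs are not described in this article, and the `Policy` class, action names, and host names here are invented. It shows the deny-by-default pattern such guardrails typically enforce: an action runs only if it is explicitly allow-listed, and network egress is restricted to approved hosts.

```python
from dataclasses import dataclass, field

@dataclass
class Policy:
    """Hypothetical allow-list policy for an agent sandbox."""
    allowed_actions: set = field(default_factory=set)
    allowed_hosts: set = field(default_factory=set)

    def permits(self, action: str, target: str) -> bool:
        # Deny by default: the action must be explicitly allowed...
        if action not in self.allowed_actions:
            return False
        # ...and any network call must go to an approved egress host.
        if action == "http_request" and target not in self.allowed_hosts:
            return False
        return True

policy = Policy(
    allowed_actions={"read_calendar", "http_request"},
    allowed_hosts={"api.example.com"},
)

assert policy.permits("read_calendar", "work")
assert policy.permits("http_request", "api.example.com")
assert not policy.permits("http_request", "evil.example.net")  # blocked egress
assert not policy.permits("delete_file", "/etc/passwd")        # never allow-listed
```

The design choice worth noticing is the default: everything not named is forbidden, which is exactly the property an IT department wants to audit before approving an agent deployment.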
Always-on agents need dedicated compute. NemoClaw targets Nvidia's own hardware lineup — RTX PCs, RTX PRO workstations, DGX Station, and the new DGX Spark — but the stack itself is hardware-agnostic. Dell has already announced a NemoClaw-certified workstation. The implication: we're entering the phase where agent infrastructure is being certified, the way network equipment gets certified for enterprise deployments.
The fastest-growing open source project that almost didn't happen
OpenClaw was created by Peter Steinberger, the Austrian developer who sold his PDF SDK company, PSPDFKit, for roughly €100 million (approximately $115-120M at the time of the 2021 Insight Partners acquisition). He built OpenClaw as a side project. The core insight was simple: most AI assistants are chat interfaces with no ability to actually do things. OpenClaw is built around the idea that an agent should be able to send a message, run a command, book a calendar slot — across every channel you already use.
The project launched in January 2026 under the name Clawdbot. Within 24 hours it had 9,000 GitHub stars. Within a week, over 60,000. Andrej Karpathy praised it on X. David Sacks, the White House AI and crypto czar, did the same. MacStories called it "the future of personal AI assistants."
Then Anthropic sent a polite email noting that "Clawd" and "Clawdbot" were uncomfortably close to "Claude." Steinberger pivoted to Moltbot — and within seconds, automated bots sniped the @clawdbot handle and posted a crypto wallet address. The Moltbot name triggered its own chaos: an AI-generated mascot redesign produced what the community dubbed "Handsome Molty," a human face grafted onto a lobster body. It became a meme. Meanwhile, a fake $CLAWD token briefly hit a $16 million market cap before collapsing over 90%.
By January 30th, the project had settled on OpenClaw — "Open" for open source, "Claw" for the lobster.
Why this matters for agent infrastructure
The OpenClaw story is really two stories running in parallel. The first is a developer relations case study in how not to handle viral growth: the trademark cascade, the bot chaos, the crypto grifters. The second is a genuine infrastructure story.
OpenClaw's architecture — a gateway that routes messages across 25+ channels (WhatsApp, Telegram, iMessage, Signal, Discord, Slack, and more) and delegates to whichever LLM you choose — is not novel in concept. But the implementation is. The gateway-as-control-plane model, where the assistant lives where you already communicate, sidesteps the adoption friction that killed every previous "personal AI assistant" product. You don't have to open a new app. The agent meets you where you already are.
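The gateway-as-control-plane pattern described above can be sketched in a few lines. This is a toy model, not OpenClaw's implementation — the `Gateway` and `Message` types and the echo agent are invented for illustration. The core idea it demonstrates: one process normalizes messages from many channels, delegates to a single agent (whichever LLM you choose), and replies on the channel the message came from.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Message:
    channel: str   # e.g. "whatsapp", "telegram", "slack"
    sender: str
    text: str

class Gateway:
    """Toy control plane: many channels in, one agent behind them."""
    def __init__(self, agent: Callable[[Message], str]):
        self.agent = agent
        self.outbox: list[tuple[str, str]] = []

    def receive(self, msg: Message) -> None:
        # Delegate to the pluggable agent, then reply on the same
        # channel the user wrote on -- the agent meets you where you are.
        reply = self.agent(msg)
        self.outbox.append((msg.channel, reply))

def echo_agent(msg: Message) -> str:
    # Stand-in for an LLM call; any model could sit behind this interface.
    return f"[{msg.channel}] you said: {msg.text}"

gw = Gateway(echo_agent)
gw.receive(Message("telegram", "alice", "book lunch Friday"))
assert gw.outbox == [("telegram", "[telegram] you said: book lunch Friday")]
```

The structural point is that the agent is a plain function behind an interface: swapping models changes nothing about the channels, and adding a channel changes nothing about the agent.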
NemoClaw is Nvidia's bet that this architecture becomes the default for enterprise personal agents. The security and privacy controls in the reference stack — sandbox isolation, policy-based guardrails, privacy routing — are explicitly designed to answer the question that IT departments will ask before approving any agent deployment: "What exactly can this thing do with our data?"
That's not a trivial question. An always-on agent with access to your email, calendar, and messaging apps is a powerful tool — and a powerful attack surface. The infrastructure layer that NemoClaw provides doesn't eliminate that risk, but it creates a policy surface to reason about it. That's new.
The takeaway
The convergence is worth noting: a developer-built open-source project hit 60,000 GitHub stars in a week, attracted endorsements from the AI community's most prominent names, and less than two months later, Nvidia announced a dedicated reference stack and reference hardware. The velocity from "side project" to "enterprise infrastructure partner" is faster than anything we've seen since Docker. Whether OpenClaw sustains that trajectory depends on whether the security model actually holds up in production — and whether Steinberger's "open" philosophy survives the inevitable enterprise pressure to close things down.
Sources: Nvidia press release | CNBC Mad Money transcript | CNET | OpenClaw GitHub | Steinberger's X thread on the Moltbot incident

