Brett Adcock has a theory about what makes a robot smart. It is not the language model.
Figure AI, the robotics company Adcock founded and runs, has been building humanoid robots for BMW, negotiating commercial deployments, and raising at a $39 billion valuation. It also ended its partnership with OpenAI last year. In a full interview on the Shawn Ryan Show posted to Singjupost on March 30, 2026, Adcock described Figure's internal AI group in competitive terms: "We just found that team we had internally," he said. "We just ran kind of circles around them."
That is a remarkable thing to say about the best-funded AI lab in the world. Adcock is making a competitive claim, one OpenAI has not confirmed, but it is a revealing window into how the two companies diverged, and into a fundamental split in how the field thinks about embodied intelligence. That split explains the breakup, Figure's $39 billion valuation, and why one company deleted more than 100,000 lines of C++ to start over.
The first theory is LLM-first. Put a powerful language model in a robot, give it enough actuator controls, and the robot reasons its way through physical tasks the way ChatGPT reasons its way through a poem. OpenAI's robotics team, which relaunched in February 2025 and has around 100 data collectors working around the clock in San Francisco, has been building this way. Business Insider reported in January that the lab operates from the same building as OpenAI's finance team and uses Franka robots with custom GELLO controllers for household tasks. The lab has quadrupled in size since its February 2025 launch and is planning a second facility in Richmond, California.
The second theory is physics-first. Start with the body. Collect real-world physical data at scale, teach the robot to fail and recover, and let the neural network learn from what happens when things go wrong. This is Figure's bet.
The knee joint is the clearest illustration. In the same Singjupost interview, Adcock described a resilience test: "We can lose a knee," he said. "Lose full comms of the knee. We can stiffen the joint and we can limp off to the hospital." The robot does not stop. It does not call for help. It figures out how to keep moving with what it has left. The claim is Adcock's, not independently verified. But what it illustrates is the kind of behavior you can only learn from real robots, in real environments, making real mistakes. That is not a language model trick.
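In control terms, what Adcock describes is graceful degradation: detect a fault, fall back to a safe joint mode, keep walking. A minimal sketch of that pattern, with every name hypothetical and no relation to Figure's actual stack, might look like this:

```python
# Illustrative sketch only: a toy fault-tolerance loop for a legged robot.
# All names are hypothetical; this is not Figure's code or architecture.
from dataclasses import dataclass

@dataclass
class Joint:
    name: str
    comms_ok: bool = True   # are we still receiving telemetry from this joint?
    stiffened: bool = False # has the joint been locked as a fallback?

def handle_joint_fault(joints):
    """Stiffen any joint that has lost comms, then report the gait mode."""
    for j in joints:
        if not j.comms_ok:
            j.stiffened = True  # lock the joint at its last known position
    degraded = any(j.stiffened for j in joints)
    return "limp" if degraded else "normal"

legs = [Joint("left_knee"), Joint("right_knee")]
legs[0].comms_ok = False          # simulate losing full comms to one knee
mode = handle_joint_fault(legs)   # the robot limps instead of stopping
```

The point of the pattern is the last line: the failure path ends in a degraded gait, not a halt, which is the behavior Adcock says the real system exhibits.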
The partnership ended because OpenAI decided it wanted to do robotics internally. Adcock's account of how it happened is direct: Sam Altman called to say OpenAI was going to build its own robotics system. "This is over," Adcock recalled telling him. "Just get out of here." He says he fired OpenAI on that call.
An OpenAI staff member, Tao Xu, reposted the interview clip on X and wrote that it was not true. Xu's denial and Adcock's account are both on the record; the call itself is not independently confirmed.
What is less disputed is what the partnership actually produced. According to Adcock, "there was some good brand association there, but beyond that, there wasn't much." Getting OpenAI's team into an office for robot demonstrations was, by his account, a coordination problem the partnership never solved.
Figure's Helix 02 system, the neural network stack running its robots, is not a refinement of an earlier system. It is a rewrite. As Adcock has described, the company deleted more than 100,000 lines of C++ code to replace the old architecture entirely. That is not a patch. That is a bet, one impossible to measure from the outside, that the previous approach was not going to get where Figure needed to go.
The Helix 02 stack is what Figure calls its "general-purpose" robotics system: the thing that lets a single model handle tasks it was never specifically trained on, in environments it has not seen before. Whether it delivers on that is something only time and deployment will show.
The company's Figure 02 robots have been running at BMW's plant in South Carolina since late 2024, loading more than 90,000 parts across 1,250 hours of runtime, contributing to the production of more than 30,000 X3 vehicles, and accumulating more than 1.2 million robot steps, according to Figure's own production data.
Those are real numbers in a real factory. They are also a long way from the language-model-first approach OpenAI has been building toward.
The gap between these two visions is not academic. It is a $39 billion question. Figure's valuation reflects real deployment, real manufacturing velocity, and a bet that the future of robotics is data from the physical world — not reasoning from a language model that has never lost a knee joint.
OpenAI's robotics lab is bigger than Adcock's dismissive framing suggests. The 100 data collectors, the 24/7 operations, the Richmond expansion: this is a serious investment in the LLM-first approach. Both theories are being bet on at scale. The Shawn Ryan Show interview does not resolve which is right. But it explains, more plainly than either company has before, why the split happened and what each side thinks it is building toward.