Sony AI says it built a robot that can beat humans at table tennis. A read of the underlying research says something different.
On April 22, 2026, the EE Times reported that Sony AI had developed a table tennis robot named Ace, capable of reaching amateur human level in competitive play — what the publication called a first for a learned robot agent. Peter Dürr, director of Sony AI in Zurich, was quoted discussing the achievement. The story circulated widely.
But the arXiv paper the EE Times links to tells a narrower story. The paper, titled Achieving Human Level Competitive Robot Table Tennis and dated August 7, 2024, lists as authors David B. D'Ambrosio, Saminda Abeyruwan, Laura Graesser, Atil Iscen, and colleagues. Every author on the paper carries a Google DeepMind affiliation. No Sony researcher appears in the author list.
The hardware Dürr described in the EE Times — an ABB IRB 1100 arm on linear rails, with a perception system tracking ball spin at more than 160 rotations per second — matches the setup described in the DeepMind paper. The performance numbers Sony cited also match: the robot won 45 percent of 29 matches against human opponents of varying skill levels, defeating all beginners, winning 55 percent of matches against intermediate players, and losing every match against tournament-level opponents.
Sony AI did not respond to a request for comment by publication time. Google DeepMind declined to comment.
The gap between Sony AI's public framing and the paper's author list raises a question the EE Times article did not answer: what is the relationship between Sony AI and the DeepMind system?
On the public record, which includes DeepMind's own publication page for the work, three possibilities are plausible. Sony AI could be claiming credit for a system its researchers had no hand in building. The two companies could have a licensing or partnership agreement that did not result in Sony researchers appearing on the author list. Or Sony AI could be describing a separate, Sony-led effort that coincidentally matches the DeepMind paper's specifications in every detail.
Each scenario lands differently. If Sony AI is rebadging DeepMind's work without attribution, that would be unusual even in an industry that has occasionally absorbed academic research without fanfare. Corporate labs routinely publish joint work with partner firms, and those collaborations typically appear in the author list. The absence of any Sony-affiliated researcher on a paper that Sony is now citing as the basis for its public claims is a factual discrepancy.
Sony AI does have a robotics program. Dürr's group has published work on robot manipulation and fine motor control, including a 2023 table tennis paper with researchers from the University of Tübingen that acknowledges Sony AI funding. That paper describes a different system — a KUKA robot arm, not an ABB arm — and a different institution. It does not resolve the question of who built Ace.
The DeepMind paper's actual contribution is worth understanding on its own terms, separate from the naming dispute. As the paper describes, the system uses a hierarchical control architecture: low-level controllers govern individual skills like forehand placement or spin return, while a high-level controller selects which skill to deploy based on real-time match statistics. Training proceeded in simulation using reinforcement learning, then adapted to the physical world over seven iterative cycles, learning from roughly 14,000 rally balls and 3,000 serves.
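The two-level scheme the paper describes can be sketched in miniature: low-level controllers each own one skill and accumulate statistics, while a high-level controller picks among them per incoming ball. The skill names, the topspin-based candidate rule, and the win-rate heuristic below are illustrative assumptions, not details taken from the paper.

```python
import random

class SkillController:
    """A low-level controller for one table tennis skill (hypothetical names)."""
    def __init__(self, name):
        self.name = name
        self.returns_won = 0
        self.returns_played = 0

    def win_rate(self):
        # Optimistic prior of 0.5 before any data is observed.
        if self.returns_played == 0:
            return 0.5
        return self.returns_won / self.returns_played

    def record(self, won):
        self.returns_played += 1
        self.returns_won += int(won)

class HighLevelController:
    """Selects which skill to deploy based on running match statistics."""
    def __init__(self, skills):
        self.skills = {s.name: s for s in skills}

    def select(self, incoming_is_topspin):
        # Restrict to skills applicable to the incoming ball (an assumed rule),
        # then pick the one with the best observed return rate so far.
        if incoming_is_topspin:
            candidates = ["forehand_topspin", "backhand_block"]
        else:
            candidates = ["forehand_push", "backhand_push"]
        return max((self.skills[c] for c in candidates),
                   key=lambda s: s.win_rate())

skills = [SkillController(n) for n in
          ["forehand_topspin", "backhand_block",
           "forehand_push", "backhand_push"]]
hlc = HighLevelController(skills)

# Simulate a short stretch of rallies, updating statistics as they resolve.
random.seed(0)
for _ in range(20):
    skill = hlc.select(incoming_is_topspin=random.random() < 0.5)
    skill.record(won=random.random() < 0.6)

chosen = hlc.select(incoming_is_topspin=True)
print(chosen.name, round(chosen.win_rate(), 2))
```

In the real system both levels are learned policies trained in simulation and refined against physical play; the toy statistics here stand in only for the idea that shot selection adapts to observed performance during a match.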
The result is what the paper calls the first learned robot agent to reach amateur human level in competitive table tennis. The qualifier matters: the robot lost every match against the most skilled human opponents it faced. Against beginners and intermediate players, it performed better.
Whether this constitutes a meaningful milestone depends on what you compare it to. Table tennis has been a testbed for robot learning research since the 1980s, in part because the sport demands both fast reflexes and strategic adaptation. Systems that can beat casual players have existed for years, as TechCrunch noted when covering the original paper. What DeepMind's paper describes is a learned policy rather than a system hand-coded for specific shots.
The sim-to-real transfer is the technically notable piece. Getting a reinforcement learning policy trained in a physics simulator to work reliably on a physical robot requires solving a set of calibration and robustness problems that have held back robotic learning in real-world environments. The paper describes those problems and how the team addressed them, even if the robot's failure cases against fast incoming shots and heavy spin remain unresolved.
The story Sony AI told about Ace is coherent on its own terms: a robot trained to play table tennis at an amateur level, representing a step toward machines that can operate safely alongside humans in unstructured environments. That framing is defensible regardless of who built the underlying system. What the public record does not yet support is the version where Sony AI's researchers developed the technology the paper describes.
The attribution question is not a technicality. How AI breakthroughs get named, credited, and commercialized shapes who gets investment, who gets published, and who sets the research agenda for the field. The press release was examined. The paper was read. The discrepancy between them is the story.
Reach the reporter at the address listed with this article.