The Robot Skill Problem Was Solved With Math, Not AI
Researchers at EPFL developed Kinematic Intelligence, a framework that transfers robotic skills between different machines without retraining by using pure mathematics instead of machine learning. The system classifies robots into six categories based on their singularity topology, giving each machine a complete map of its own danger zones.

When a factory swaps one robot arm for a newer model, the skill learned on the old machine does not come with it. The joint arrangements are different, the movement limits are different, the singularities — the mathematical danger zones where a robot temporarily loses control of its own motion — appear in different places. Everything has to be retrained from scratch.
Researchers at EPFL's Learning Algorithms and Systems Laboratory say they have found a way to fix that. In a paper published April 15 in Science Robotics, the team describes a framework called Kinematic Intelligence that lets a single demonstrated skill run safely on a completely different robot without retraining. The twist: they built it without machine learning. No neural networks, no training data, no risk of a system producing something incoherent.
"We wanted certainty, not probabilities," said Sthithpragya Gupta, a co-first author and PhD student in the lab.
The core problem is singularities. When a robot's joints align in certain ways during a task, the mathematics governing its motion break down. A joint may try to spin at infinite speed to compensate, causing a sudden, unsafe movement. The condition is analogous to locking your elbows at full extension while pushing a heavy object: the arms lose side-to-side control for a moment. Different robots have different singularity topologies because their physical structures differ.
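The "infinite speed" failure can be seen in a toy model. The sketch below (not from the paper; a standard planar two-link arm used here purely for illustration) computes the arm's Jacobian, whose determinant shrinks to zero as the elbow straightens. Solving for the joint rates needed to hold a fixed tip velocity then produces ever-larger joint speeds:

```python
import numpy as np

def jacobian_2r(theta1, theta2, l1=1.0, l2=1.0):
    """Jacobian of a planar two-link (2R) arm's end-effector position.

    Its determinant is l1*l2*sin(theta2): zero when the elbow is
    straight (theta2 = 0), the arm's singular configuration."""
    j11 = -l1 * np.sin(theta1) - l2 * np.sin(theta1 + theta2)
    j12 = -l2 * np.sin(theta1 + theta2)
    j21 =  l1 * np.cos(theta1) + l2 * np.cos(theta1 + theta2)
    j22 =  l2 * np.cos(theta1 + theta2)
    return np.array([[j11, j12], [j21, j22]])

# As theta2 approaches the singularity, the joint rates required to
# maintain the same modest tip velocity (0.1 m/s) blow up.
for theta2 in (1.0, 0.1, 0.01):
    J = jacobian_2r(0.3, theta2)
    qdot = np.linalg.solve(J, [0.1, 0.0])
    print(f"theta2={theta2:5.2f}  det={np.linalg.det(J):8.5f}  "
          f"max|qdot|={np.abs(qdot).max():9.2f}")
```

This is the "locked elbows" effect in miniature: the closer the elbow joint is to full extension, the harder every joint must work to produce motion in the lost direction.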
Gupta's team gave each robot a deep mathematical model of its own limits: where its singularities lie, how far its joints can actually move, which regions of its movement space are safe. They classified three-revolute robots — the foundational building blocks of most commercial arms — into six categories based on the topology of their aspects, the feasible regions carved out by joint limits and singularities. Knowing which category a robot falls into gives you a complete map of its danger zones.
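The idea of aspects can be made concrete on the same toy two-link arm (again, an illustration under our own assumptions, not the paper's three-revolute taxonomy). Joint limits and the singular locus sin(theta2) = 0 carve the configuration space into disconnected feasible regions; counting those connected regions is a crude stand-in for reading off the aspect topology:

```python
from collections import deque
import numpy as np

def count_aspects(lim1, lim2, n=101, eps=0.05):
    """Count connected singularity-free regions (aspects) of a planar
    2R arm on a grid over joint space.

    A configuration is feasible if both joints are within their limits
    and the arm is at least eps away from the singular locus
    sin(theta2) = 0. Connected components are found by flood fill."""
    t2 = np.linspace(*lim2, n)
    feasible = np.tile(np.abs(np.sin(t2)) > eps, (n, 1))  # rows: theta1
    seen = np.zeros_like(feasible, dtype=bool)
    count = 0
    for i in range(n):
        for j in range(n):
            if feasible[i, j] and not seen[i, j]:
                count += 1                      # found a new region
                queue = deque([(i, j)])
                seen[i, j] = True
                while queue:
                    a, b = queue.popleft()
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        x, y = a + da, b + db
                        if 0 <= x < n and 0 <= y < n \
                                and feasible[x, y] and not seen[x, y]:
                            seen[x, y] = True
                            queue.append((x, y))
    return count

print(count_aspects((-2.0, 2.0), (-2.5, 2.5)))  # 2: elbow-up and elbow-down
```

For this arm the singular locus splits joint space into exactly two aspects, elbow-up and elbow-down; tightening the elbow's limits to one side (say, theta2 in [0.2, 2.5]) leaves a single aspect. The paper's contribution is doing this classification exactly, by topology, for the three-revolute structures underlying commercial arms.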
The team validated the approach on three commercially available machines with very different constraints: a 6-DoF Duatic DynaArm with tight joint limits, a 7-DoF KUKA LBR iiwa 7 with moderate limits, and a 7-DoF Neura Robotics Maira M with much more relaxed boundaries. In an assembly line experiment, a human demonstrated a three-step task: pushing a block off a conveyor belt onto a workbench, picking it up and placing it on a table, then picking it up and throwing it into a basket. Each robot handled one step. Then the team shuffled them. The KUKA pushed, the DynaArm threw, the Neura picked and placed. The sequence completed without retraining on any configuration.
The key mechanism for avoiding singularities is what the researchers call a track cycle: when a robot approaches a singularity boundary, it redirects its motion along the boundary edge rather than through it, carefully following the edge until it reaches a safe configuration where it can re-enter the nominal path. This is algebra, not optimization.
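A rough geometric intuition for edge-following can be sketched on the toy two-link arm. This is emphatically not the paper's algorithm (theirs is an algebraic construction on the real robots' aspect boundaries); here we simply project the commanded joint velocity onto the boundary tangent whenever a step would push the elbow through its singular locus:

```python
import numpy as np

def safe_step(theta, v_des, dt=0.01, l1=1.0, l2=1.0, margin=0.1):
    """One velocity-control step for a planar 2R arm that refuses to
    cross its singular locus (theta2 = 0, the straight-elbow pose).

    Toy illustration of boundary-following: if the commanded step would
    drive theta2 into the boundary, we zero the boundary-crossing
    component and slide along the edge (theta1 keeps moving) instead."""
    t1, t2 = theta
    J = np.array([[-l1*np.sin(t1) - l2*np.sin(t1+t2), -l2*np.sin(t1+t2)],
                  [ l1*np.cos(t1) + l2*np.cos(t1+t2),  l2*np.cos(t1+t2)]])
    # Least-squares inverse stays finite even when J is near-singular.
    qdot = np.linalg.lstsq(J, np.asarray(v_des, float), rcond=None)[0]
    if abs(t2) < margin and t2 * qdot[1] < 0:
        qdot = np.array([qdot[0], 0.0])   # redirect along the boundary edge
    return np.array([t1, t2]) + dt * qdot

theta = np.array([0.3, 0.05])       # 0.05 rad from the singular locus
print(safe_step(theta, [0.1, 0.0])) # theta2 is held; theta1 keeps moving
```

The hedge matters: this projection is the kind of reactive safety filter the paper contrasts itself against. The track cycle instead follows the boundary by construction, with an algebraic guarantee of reaching a safe re-entry configuration rather than a numerical threshold.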
This approach sidesteps the main trade-off in current robot skill transfer. The standard industrial fix is to hand-build inverse kinematics models and bolt on safety filters, which requires significant expertise and still does not guarantee robustness across hardware changes. The newer machine learning route reduces engineering effort but requires access to every robot in the fleet during training, and it produces black-box policies that can behave unpredictably near singularity boundaries; in a factory, that unpredictability can be catastrophic.
"Our goal is to remove the need for technical expertise while still ensuring safe and reliable operation," said Durgesh Haribhau Salunkhe, the co-first author and a scientist in the lab. "The user brings the idea and the desired behavior, and the robot should take care of the rest."
The researchers acknowledge significant limitations. Kinematic Intelligence does not currently integrate external sensing, so it cannot distinguish between moving a full container and an empty one, or apply common-sense safety checks like refusing to grab a knife when asked to prepare coffee. Deploying it in medical settings is bottlenecked by hardware constraints. The team estimates five years before mechanically safer robots enable that transition.
The research was conducted at EPFL's Learning Algorithms and Systems Laboratory, led by Aude Billard. The full paper is behind a paywall; this article draws on the EPFL news release and reporting by Ars Technica. A competing approach called Intention-Aligned Imitation Learning, which uses natural language to align robot behaviors across different physical designs, was published separately in March 2026. No robot manufacturer has publicly committed to adopting the Kinematic Intelligence framework.
What the EPFL team has shown is that the portability problem in robotics can be solved with classical mathematics rather than learned policies. Whether that solution scales to real factory floors, where environments are unpredictable and humans move in ways motion-capture labs do not, is the question that determines whether this stays a lab result or becomes an industrial standard.
