When most agent frameworks need a robot to plan a route and dodge obstacles at the same time, they bolt on a classical planner — a separate system that talks to the reactive layer through some kind of bridge. It works. It's also a dependency graph waiting to break.
A new paper from AAMAS 2026 takes a different angle. Logical Robots, from researchers at UC Davis and the University of Bonn, builds a 2D multi-agent robot simulation where a single declarative logic language handles both tasks. No bridge. No separate planner. The language is Logica, originally developed at Google and since open-sourced; it compiles to SQL, the language behind most mainstream databases, and runs on DuckDB, SQLite, BigQuery, or PostgreSQL. The idea is that robot sensor streams become database tables, and SQL aggregations (operations that summarize rows of data, like finding a minimum or computing a weighted average) handle everything from low-level steering to high-level path planning.
The paper demonstrates this in a labyrinth demo with ten progressively challenging scenarios. In the simplest case, reactive obstacle avoidance uses a SQL aggregation called FreedomMotion: radar sensor rays vote on steering angles weighted by distance to obstacles. Each ray casts a stronger vote for directions that are clear. The aggregation sums those votes, and the result is a steering correction. No neural network. No separate planning module. Just a weighted average over a table of sensor readings.
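The voting idea can be sketched in plain Python. This is an illustration of the mechanism described above, not the paper's actual Logica code; the ray angles and distances are made up:

```python
# Sketch of a FreedomMotion-style aggregation: each radar ray votes
# for its own direction, weighted by how clear that direction is.
# (Illustrative names and values, not the paper's implementation.)

def freedom_motion(rays):
    """rays: list of (angle_in_radians, distance_to_obstacle).
    Returns a steering correction as a clearance-weighted average:
    clearer directions (larger distances) pull harder."""
    total_weight = sum(dist for _, dist in rays)
    if total_weight == 0:
        return 0.0  # every direction blocked: no correction
    return sum(angle * dist for angle, dist in rays) / total_weight

# Three rays: the left is nearly blocked, the right is wide open.
rays = [(-0.5, 1.0), (0.0, 3.0), (0.5, 9.0)]
steer = freedom_motion(rays)  # positive: steers toward the clear right
```

In SQL terms this is just `SUM(angle * dist) / SUM(dist)` over a table of sensor readings, which is why it fits a database-backed language so naturally.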
The more interesting case is distributed pathfinding. Robots independently observe beacons scattered through the labyrinth and store pairwise distances in local memory. A designated leader robot then aggregates all those memories to build a navigation graph — effectively a map of which beacons connect to which and how far apart they are. Once any robot reaches a designated "Home" beacon, the leader runs a shortest-path computation across the beacon network (using the Bellman-Ford algorithm, a classic approach that iteratively improves distance estimates between connected points), and the other robots follow the computed routes. The paper describes the core computation as a Min= aggregation that iteratively updates each beacon's shortest distance to Home by checking three cases: keep the previous distance, set it to zero if this is Home, or compute distance via a neighbor beacon plus the edge distance between them.
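The three-case recurrence can also be sketched in Python. This is an illustration under assumed beacon names and edge distances, not the paper's Logica code:

```python
import math

# Bellman-Ford as a repeated Min= style aggregation: each round
# re-derives every beacon's distance to Home from the three cases
# described above. (Illustrative graph, not from the paper.)

def shortest_to_home(edges, beacons, home):
    """edges: dict {(a, b): distance}, treated as bidirectional.
    Returns {beacon: shortest distance to home}."""
    dist = {b: math.inf for b in beacons}
    for _ in range(len(beacons)):  # enough rounds for any simple path
        new_dist = {}
        for b in beacons:
            candidates = [dist[b]]           # case 1: keep previous distance
            if b == home:
                candidates.append(0.0)       # case 2: Home is distance zero
            for (x, y), d in edges.items():  # case 3: via a neighbor beacon
                if x == b:
                    candidates.append(dist[y] + d)
                elif y == b:
                    candidates.append(dist[x] + d)
            new_dist[b] = min(candidates)    # the Min= aggregation
        dist = new_dist
    return dist

edges = {("A", "B"): 2.0, ("B", "Home"): 3.0, ("A", "Home"): 7.0}
dist = shortest_to_home(edges, ["A", "B", "Home"], "Home")
# dist["A"] is 5.0: the A -> B -> Home route beats the direct 7.0 edge
```

The `min(candidates)` line is the whole trick: expressed as a declarative rule, the same aggregation machinery that averaged sensor rays now relaxes a shortest-path recurrence.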
What makes this structurally interesting is that the same aggregation primitive handles both cases. Reactive steering is a weighted average. Pathfinding is a shortest-path recurrence. Both are expressible as declarative rules over relational data. The paper calls this a proof that "Logica can help unify symbolic planning and low-level control through declarative aggregations."
Whether that unification holds outside a 2D labyrinth is the open question. The paper itself is in the Blue Sky Ideas track, which the International Conference on Autonomous Agents and Multiagent Systems uses for "visionary ideas that open new research directions" — not validated production systems. The platform (demo, source code) is real and runs in a browser. That's further along than a lot of Blue Sky work, but the leap from colored squares navigating a labyrinth to real robots coordinating in physical space is substantial.
The paper's framing connects to a genuine tension in the agent framework space. Most production agent systems — LangChain, AutoGen, CrewAI — treat planning as a separate concern, typically handled by an LLM or a classical planner called as a tool. The interface between the reactive layer and the planning layer is where latency lives and where context gets lost. A unified declarative layer that handles both through the same abstraction is a clean theoretical answer. Whether it scales is a different question.
One explicit design choice: the paper positions Logica against the grounding bottleneck in Answer Set Programming and Prolog, languages that also offer declarative robot control but struggle with large sensor data streams. Logica's SQL compilation is the proposed escape hatch — sensor fusion and symbolic planning at database speeds, in one language.
The ten example scenarios cover formation navigation, distributed mapping, and simultaneous goal pursuit. Level 8 demonstrates robots reading each other's memory to follow a leader. Level 10 runs the Bellman-Ford distributed mapping scenario. The paper acknowledges that real-world deployment would require handling uncertainty, sensor noise, and communication delays that the simulation abstracts away.
The demo is at logica.dev/robots. Source code is on GitHub under the main logica repository. The conference is in Paphos, Cyprus, May 25-29 — so the presentation is still ahead.
What to watch: whether the SQL compilation angle is the most exportable part of this work. Any system that can already express robot behaviors as database queries could potentially adopt this aggregation pattern without adopting Logica itself. That's a narrower but more plausible path to impact than convincing robotics teams to rewrite in a niche academic language.
The Blue Sky framing is appropriate here. This is an idea worth watching, not a system ready to deploy. The dependency graph it proposes is clean. Whether it survives contact with actual robots is the question the next paper needs to answer.