The most important scientific papers, decoded. 309 papers analyzed from arXiv and beyond.
The NSF-DOE Vera C. Rubin Observatory...
The interesting thing about Tim Palmer’s new argument on quantum computing is not the headline-friendly claim that the field may run into a hard limit.
A Stanford PhD student has built a method that lets language models improve themselves through personalization — with no additional labeled data, no external judge, and no verifiable rewards.
The hard problem in long-horizon agent design isn't getting started — it's staying on track.
When a new paper asks whether large language models can actually plan a trip — not just describe one — the answer turns out to be nearly zero. ItinBench, a benchmark developed by researchers at the University of Virginia, tests LLMs on itinerary planning across two cognitive dimensions simultaneously...
The interesting part of New York University's new “time crystal” is not that it breaks Newton's third law.
IBM and Cleveland Clinic did not just “simulate a protein,” despite the cheerful fog machine around the announcement.
Before You Edit, Know What Will Break: CLaRE Maps the Fault Lines in LLM Knowledge
Model editing is becoming a routine operation.
Drone swarms don't receive a steady stream of commands.
TTQ: Mitsubishi Electric's Researchers Solved a Quiet Problem in Production Quantization
Every time you deploy a quantized language model, you make a bet.
A research team spanning East China Normal University, Beihang, Fudan, and Shanghai UIBE has built something that pushes against one of the more persistent inefficiencies in agentic AI systems: the assumption that every step of a workflow needs a language model. Their system, HyEvo, automatically...
The operating system kernel at the heart of DARPA autonomous vehicles and Boeing rotorcraft runs on something unusual: mathematical proof.
Quantum coherence headlines are usually where precision goes to die.
Agent marketplaces are proliferating faster than the trust infrastructure to support them.
A team from TTI-Chicago, the University of Chicago, and MIT CSAIL has published a paper introducing a new approach to vector sketch generation that teaches a multimodal language model to build drawings one semantic part at a time — and crucially, trains it to care about the process, not just the...
AI math systems can prove theorems.
Every automatic prompt optimization system built to date has shared the same design assumption: the user's question is fixed input.