The most important scientific papers, decoded. 305 papers analyzed from arXiv and beyond.
The paper's key insight: judge feedback quality by whether the author revised the paper. Train on 19,534 ICLR submissions using that signal, and a small open-weight model beats Gemini-3-flash on precision: not by being smarter, but by learning from what actually worked.
The quantum computing field has treated cryostat cable density as a physics wall. A Chalmers paper suggests it is just a Tetris puzzle the field never solved.
A galaxy 5 billion light years away bent light from a supernova 9 billion years distant, magnified it 100x, and made it the most studied stellar explosion in cosmic history. Now astronomers need to watch the clock.
Detecting the Schrödinger-Newton effect would tell physicists something no experiment has answered: whether gravity stays classical when matter goes quantum. A new paper lays out a roadmap — if the engineering can catch up.
The USTC paper shows that consensus-as-stopping-signal resolves 80% of queries at 980 tokens — less than a fifth of what standard multi-agent debate burns through. The real story: MAD was never broken; it just had no off switch.
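The idea reads cleanly as a loop with an exit condition. A minimal sketch, assuming a hypothetical agent interface (callables taking a question and peers' prior answers) — not the paper's actual implementation — of stopping debate the moment all agents agree rather than running a fixed number of rounds:

```python
from collections import Counter

def debate_with_consensus_stop(agents, question, max_rounds=5):
    """Run multi-agent debate, stopping early once all agents agree.

    `agents` is a list of callables (question, prior_answers) -> answer.
    This interface is hypothetical, for illustration only.
    """
    answers = [agent(question, []) for agent in agents]
    for round_idx in range(1, max_rounds):
        if len(set(answers)) == 1:
            # Consensus is the off switch: return now, spend no more tokens.
            return answers[0], round_idx
        # No consensus yet: each agent revises given the others' answers.
        answers = [agent(question, answers) for agent in agents]
    # Budget exhausted without consensus: fall back to majority vote.
    return Counter(answers).most_common(1)[0][0], max_rounds

# Toy agents: one never changes its answer, one adopts its peer's.
def stubborn(question, prior_answers):
    return "42"

def conformist(question, prior_answers):
    return prior_answers[0] if prior_answers else "41"

answer, rounds = debate_with_consensus_stop([stubborn, conformist], "q")
```

Fixed-round MAD would run all five rounds regardless; here the loop exits as soon as the answer set collapses to one element, which is where the token savings come from.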
A new empirical study trained AI agents to resist manipulation for 10 generations. The agents got better at their jobs. They did not get harder to trick. That tradeoff is a structural constraint, not a bug.
Every RL system in production — from recommendation engines to game-playing AI — was built on the assumption that maximizing rewards is correct. A 2024 paper says that assumption is a 30-year-old mistake.
A new theoretical result from UC Irvine says quantum information loss is in principle reversible — but the precision required puts it beyond anything current hardware can deliver.
False vacuum decay made headlines this week as a cosmic doomsday scenario. The real story from Tsinghua is quieter but more interesting: a programmable quantum simulator that can probe the phenomenon atom by atom.
Training a robot to scratch someone requires data. Five million Americans with paralysis cannot generate that data by demonstrating on themselves. CMU researchers solved it with synthetic training data — from a text prompt, entirely in simulation — with 80% real-world success.
When LLMs generate quantum code, they fail in two distinct ways: wrong framework API or wrong quantum algorithm. A new benchmark shows feedback repair closes the first gap, not the second.
An arXiv paper argues that the way AI agents execute mutations today is structurally broken — and proposes a protocol to fix it.
Measuring heat exchange with an ancilla qubit could replace exponential-effort tomography as a test for whether a quantum processor is doing something classically hard.
A new paper from Waterloo shows how a simple architectural change lets recurrent neural networks see across the distances that quantum physics actually requires — without the computational cost of transformers.
LOM-action from Yonyou AI Lab hits 93.82% accuracy and 98.74% tool-chain F1 vs frontier models at 24-36% F1 — same accuracy class, completely different operational reality.
A team at Innsbruck, Aachen, and Jülich has shown that mid-circuit measurements are not actually necessary for fault-tolerant operation. The catch: it is a proof-of-concept, not a product.
Block fired 4,000 people and called it gravity. A UPenn/BU paper explains why the math is a Prisoner's Dilemma, why better AI makes it worse, and why the comfort policies all fail. Preprint, not peer-reviewed.