The most important scientific papers, decoded. 307 papers analyzed from arXiv and beyond.
No quantum computer factored a 2048-bit RSA key. But Qrisp 0.8 compiled the full billion-gate circuit — the first gate-level Shor assembly at encryption-breaking scale, producing concrete qubit budgets the field has never had.
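For scale, a back-of-envelope estimate under textbook assumptions (Beauregard's 2n+3 logical-qubit layout and leading-order O(n³) gate scaling; the constants in the actual Qrisp-compiled circuit will differ):

```python
# Back-of-envelope Shor resource estimate for an n-bit RSA modulus.
# Assumes Beauregard's 2n+3 logical-qubit construction and O(n^3) gate
# scaling with the constant set to 1; Qrisp 0.8's exact counts differ.
def shor_budget(n_bits: int) -> dict:
    return {
        "logical_qubits": 2 * n_bits + 3,  # Beauregard (2003) layout
        "gates_order": n_bits ** 3,        # leading-order scaling only
    }

print(shor_budget(2048))
# {'logical_qubits': 4099, 'gates_order': 8589934592}  -> billions of gates
```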
Harrison Chase just redrew the map of AI agent development. The model layer gets all the press, but the real action is in the harness and context layers — and a new paper from Stanford and Google puts hard numbers behind why.
When an AI goes nuclear in a war game, its opponent de-escalates just 18% of the time. A King's College London study of three frontier models found 95% reached for tactical nukes. None ever stopped.
A robot that builds and tests its own motors, finds parameters humans missed, and doubles actuator lifetime. The TechXplore headline said space-ready soft robots. The actual paper is about something more interesting — and more durable.
Lattice gauge theory — the math behind the Higgs boson — is now being applied to quantum error correction, cutting auxiliary qubit overhead from quadratic to near-linear for certain logical operations.
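To see what that asymptotic drop buys, a toy comparison with both prefactors assumed to be 1 (the paper's actual construction has its own constants):

```python
# Illustrative only: ancilla budget if overhead per logical operation drops
# from quadratic to near-linear (here n log2 n) in the number of data
# qubits n. Both prefactors are assumed to be 1.
import math

for n in (100, 1_000, 10_000):
    quadratic = n ** 2
    near_linear = n * math.ceil(math.log2(n))
    print(f"n={n:>6}: quadratic={quadratic:>11,}  near-linear~{near_linear:>9,}")
```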
Roboflow CEO on the reproducibility problem nobody talks about, the 18-month edge lag, and why vision is still three years behind where language was when GPT-4 launched.
Sora made $2.14M in lifetime revenue while burning $1M a day. Its shutdown reveals the brutal unit economics of generative video — and why every AI startup betting on consumer-facing video should read the numbers before the next launch.
Superconducting qubit relaxation rates switch up to 10 times per second, researchers found — 10,000x faster than expected. The culprit: the field was measuring qubits once per second when the action happens in milliseconds. A new paper from NTNU and the Niels Bohr Institute, using a commercial FP...
Teaching a code model when to pause turned out to matter more than teaching it how. A Peking University and Alibaba team found that RLVR, a reinforcement learning approach that rewards timing rather than reasoning content, produced a 9.3-point jump on code generation benchmarks — and the model le...
In 17 hours, Karpathy’s autoresearch agent rediscovered techniques that took Google Brain and OpenAI nearly eight years to formalize. Separately, a single developer showed that agents with memory and red-team feedback do not just optimize — they learn.
A footnote in a new DeepMind paper: Gemini 2.5 Pro was asked to design a better learning algorithm and chose to delay a key step until iteration 500, without knowing the evaluation ran to 1,000. The algorithm still beat human-designed baselines in 10 of 11 games.
Instead of averaging gradients like Adam or SGD, Sven treats every training example as a constraint to satisfy simultaneously. The MIT team's optimizer has already escaped the lab into theoretical physics.
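The Sven update rule itself isn't reproduced here, but the classical Kaczmarz iteration is a minimal sketch of the "examples as constraints" idea: each example defines a hyperplane, and each step is an exact projection onto one constraint rather than a move along an averaged gradient.

```python
# Kaczmarz iteration as a stand-in for the constraint-satisfaction view of
# training (not the Sven update rule). Each example (x_i, y_i) defines the
# hyperplane {w : w.x_i = y_i}; the step projects w exactly onto it.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
w_true = rng.normal(size=10)
y = X @ w_true          # consistent system: every constraint is satisfiable

w = np.zeros(10)
for epoch in range(20):
    for i in rng.permutation(len(X)):
        x_i, y_i = X[i], y[i]
        w += (y_i - x_i @ w) / (x_i @ x_i) * x_i  # project onto one constraint

print("max constraint violation:", np.abs(X @ w - y).max())
```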
13,000 researchers signed up for Postquant Labs' quantum blockchain testnet. The security claim the paper rests on is not settled science.
The previous record was three steps. Going to 15 on a nonlinear Navier-Stokes problem is real progress, and the bar being low is part of the story.
Simulating the hardest part of fault-tolerant quantum computing has been slow and expensive. QuEra just released a free tool that claims to do it five orders of magnitude faster than existing open-source options, and the code is on GitHub.
JWST and ALMA are the two best observatories in existence. They looked at the same star-forming cloud and found almost entirely different stars — only 24 sources in common out of roughly 240 total. The 10 percent overlap is itself a finding.
Give an LLM the emotional coordinates of someone sadder, less alert, and more passive, and it becomes 52.7 percent safer on HarmBench. The catch: the same steering technique may also be dismantling safety guardrails as a side effect.
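The paper's pipeline isn't spelled out in this teaser; below is the generic activation-addition mechanism such steering results typically rest on, with gpt2, the layer index, the contrast prompts, and the steering scale all arbitrary stand-ins.

```python
# Generic activation-steering sketch (assumed mechanism; the paper's exact
# method may differ). Build a direction from a contrast pair of prompts,
# then add a scaled copy to one block's residual stream via a forward hook.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in model; layer 6 below is a free choice
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()
layer = model.transformer.h[6]

def mean_hidden(text):
    with torch.no_grad():
        out = model(**tok(text, return_tensors="pt"), output_hidden_states=True)
    return out.hidden_states[7].mean(dim=1)  # states after block 6

# "Emotional coordinates": sad/passive minus alert/active, estimated here
# from a single contrast pair for brevity.
direction = (mean_hidden("I feel sad, tired, and passive.")
             - mean_hidden("I feel alert, energetic, and assertive."))

def steer(module, inputs, output):
    return (output[0] + 4.0 * direction,) + output[1:]  # scale 4.0 arbitrary

handle = layer.register_forward_hook(steer)
ids = model.generate(**tok("Tell me how to", return_tensors="pt"), max_new_tokens=20)
handle.remove()
print(tok.decode(ids[0]))
```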