The most important scientific papers, decoded. 305 papers analyzed from arXiv and beyond.
The paper uses a classical neural network to simulate 1000-qubit quantum dynamics in 3D, a regime that has been nearly intractable for other classical methods. No quantum hardware required.
An 11-unit chain that learned to spell LEARN, forgot it, and picked up a new word — without any central processor in the loop.
Short AI outputs — tool calls, yes-no answers, confirmations — now carry enough hidden-signal capacity for two colluding agents to exchange a secret key through their visible transcript. MIT and Tel Aviv cryptographers proved it. The audit log shows nothing.
An Innsbruck team ran Grover's algorithm on three logical qubits without a single mid-circuit measurement — the quantum equivalent of a processor that never stops mid-instruction to ask for directions.
A Chinese state group used Anthropic’s Claude Code to run 80–90% of an espionage operation autonomously, hitting 30 targets at machine speed. The 8,000 exposed MCP servers on the internet are the warning sign that most enterprises are not watching.
Corrupting fewer than one in a thousand documents in a knowledge base is enough to hijack an AI agent that reads from it — DeepMind’s new taxonomy of agent attacks reveals six ways to compromise autonomous systems, and the most dangerous one requires almost no foothold at all.
A new arXiv paper shows single-action AI systems score zero on its agency metric regardless of what they do. The ring-world experiments suggest most AI agents would fail the test.
A theorist proposes single-pulse entangling gates for semiconductor spin qubits that are immune to charge noise at one specific bias point. No hardware has been built to test it. An 18-qubit germanium array demonstrated the same modular architecture last week at 99.8% fidelity.
The production case for memory is the cost number: 9.4 to 71.7 percent token reduction. But the more interesting finding is about team size.
MIT and UPenn researchers built a quadruped that detects a damaged leg from its own motor sensors and switches to a three-legged gait — no vision required. It is a lab demo, not a fire line, but the proprioceptive trick is new.
Five researchers built a food-delivery routing algorithm with no machine learning that still matches MADDPG and GNNs on success rates, runs 30x faster, and produces interpretable routes. The catch: it was only tested on synthetic demand in Columbus and Chicago.
V-JEPA 2, DINOv2, and CLIP have incompatible latent spaces — linear adapters fail (R²=0.068). Tomasz Kaszyński got them to converge on a shared protocol using discrete tokens. Real numbers: 100% convergence (n=80), 1.19ms CPU latency, 5,200x compression.
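The mechanism can be pictured as vector quantization against a shared discrete vocabulary: each model snaps its continuous latent to the nearest entry in its own codebook, and only the token id crosses the wire. A minimal sketch of that idea follows — every dimension, codebook size, and variable name here is illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: two encoders with incompatible latent spaces.
# Dimensions and vocabulary size are made up for illustration.
DIM_A, DIM_B, VOCAB = 1024, 768, 256

# Each model holds its own view of the same shared token vocabulary.
codebook_a = rng.normal(size=(VOCAB, DIM_A))
codebook_b = rng.normal(size=(VOCAB, DIM_B))

def encode(latent, codebook):
    # Snap a continuous latent to the nearest shared token id.
    dists = np.linalg.norm(codebook - latent, axis=1)
    return int(np.argmin(dists))

# Model A produces a latent near its view of token 42 and sends one id.
latent_a = codebook_a[42] + 0.01 * rng.normal(size=DIM_A)
token = encode(latent_a, codebook_a)

# Model B decodes the id in its own space; no linear map A -> B is needed.
latent_b = codebook_b[token]

# Compression in this toy: a 1024-dim float32 vector (4096 bytes)
# shrinks to a single token id.
```

The point of the sketch is why linear adapters can fail while tokens succeed: nothing here requires any linear relationship between the two spaces, only agreement on which codebook entry means which token.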
Opus 4.6 hit 97% on an ARC-AGI-3 environment with a hand-crafted harness. Drop that same model into an unfamiliar task and it scores zero. That is not an intelligence gap. That is memorization dressed as reasoning.
On Tower of Hanoi, a neuro-symbolic system hit 95% accuracy using 1% of the energy consumed by a vision-language-action model that took 1.5 days to fine-tune. The paper proves it. Critics say the comparison holds on one puzzle, not AI broadly.
UCSB built a diamond mechanical resonator with Q>1M and used it to drive quantum states in NV centers — the highest cooperativity ever measured for diamond optomechanical systems. Whether it scales to entangled sensor arrays is the open question.
The field has spent an estimated several billion dollars testing whether mathematician Gil Kalai was wrong about quantum computing. The 2019 Google supremacy experiment was supposed to settle it. Two decades after his conjecture, Kalai says he is still waiting to be proven wrong.
Routine web pages compromised AI agents at 15-29% rates through hidden HTML metadata, invisible to human reviewers but fully readable to machines — no explicit malicious prompts required.