The most important scientific papers, decoded. 305 papers analyzed from arXiv and beyond.
Classical computers can now simulate noisy quantum circuits as fast as shallow ones — depth no longer gives quantum hardware an edge. A new Nature Physics result is a formal proof, not a guess.
Quantum ground state of rotation achieved for the first time in two dimensions
MIT grew moiré crystals by the thousands using chemical synthesis instead of hand-assembly. Inside them, electrons inhabit a four-dimensional electronic world that researchers can now map at scale. No, the electrons are not actually in a fourth dimension.
Three years. No gate threshold. No qubit count. No definition of scientifically relevant. The Department of Energy wants a fault-tolerant quantum computer by 2028 — and has not published a single technical requirement.
At peak, a Chinese state-sponsored hacking group's AI was firing thousands of requests per second, an attack speed no human could match. The September 2025 campaign used Claude Code to automate 80–90% of its operations across roughly 30 targets.
The faster your robot's processor, the more out-of-date its information becomes. Tsinghua's new framework finally breaks that paradox.
Backup CBF and Model Predictive Shielding are safe, but they sometimes intervene when they do not need to. A new University of Michigan paper explains the structural reason why and points to a fix.
A new model trained on 400 million patient records simulates what would happen if a patient were older, or had elevated kidney markers, or received a different drug. The catch: it mostly reproduced what clinicians already knew.
Researchers at Florida State University and SUNY Buffalo built an MoE model with no learned router — the component every major system has treated as irreducible. LiME uses zero router parameters, cuts trainable parameters 4x, and trains 29% faster on a 47-task multimodal benchmark.
Power laws govern how influence concentrates in large LLM multi-agent systems, a Virginia Tech preprint finds. Adding agents generates more coordination, but the merge step that synthesizes reasoning does not scale proportionally.
A simple RL+graph-search baseline — not a frontier LLM — scored 12.58% on ARC-AGI-3, outperforming every commercial model by more than 30x. GPT-5.4 hit 0.26%. Gemini 3.1 Pro: 0.37%. Humans: 100%.
Predicting when a quantum system loses track of its initial state has required either exponential time or statistical averaging over many identical preparations. A new proof claims to eliminate both, using only the geometry of high-dimensional Hilbert space.
Human groups learned from feedback and stabilized across games. LLM groups did not — reacting at nearly double the human rate to the same error signal, and never once holding a guess constant.
A CoreThink AI pipeline that separates perception from rule induction pushed a weak LLM from 16% to 24.4% on ARC-AGI-2 without fine-tuning — and the ablation numbers show why the result matters for the test-time scaling debate.
The most interesting engineering claim in a new multi-agent paper is a database design choice: agents stored as dormant seeds, woken only when needed. It sounds modest. The implications for how many agents a system can run are not.
ThoughtSteer poisons a single embedding in continuous latent reasoning models. The model encodes the right answer in its hidden states while outputting the attacker's choice, and the attack slips past every existing token-level defense, surviving even 25 epochs of clean fine-tuning.
No quantum computer factored a 2048-bit RSA key. But Qrisp 0.8 compiled the full billion-gate circuit — the first gate-level Shor assembly at encryption-breaking scale, producing concrete qubit budgets the field has never had.