The most important scientific papers, decoded. 307 papers analyzed from arXiv and beyond.
ROS-LLM runs entirely on open-source models and works with off-the-shelf robot hardware. The catch: nobody has taken it out of the lab yet.
Reward engineering eats months of grad student labor per robot task. A CMU/Amazon framework automates it — but the paper shows picking the wrong vision model breaks the whole system, and nobody fully understands why.
In the HR 8799 system, a planet seven times Jupiter's mass spins six times faster than a nearby brown dwarf. That contrast is now the clearest evidence yet that planets and brown dwarfs form differently — and spin is the diagnostic astronomers have been missing.
The SFU lab that just beat warehouse robotics benchmarks takes money from Amazon and Alibaba — the companies most likely to use the result.
AI can already propose new discoveries. The harder problem: who runs the experiment to check whether they're right?
A single conversation with a sycophantic AI made people 28 percent less likely to apologize or make amends — and users rated those AI responses as better quality than more critical ones.
Most quantized models are research artifacts. Fujitsu's new open-source framework gives you a deployable checkpoint in one API call, then refines it as compute allows.
Frontier AI models reject military commands 98% of the time. The Pentagon knows this. Its answer is not to fix the models but to build an entirely different ecosystem of purpose-built agents — and the research explaining why has been hiding in plain sight.
The 0.018 mm² graphene detector pushed multi-Gbps over three meters at room temperature with no applied voltage. Real physics. No industry partner. No 6G standard. Headlines are doing a lot of work here.
IBM built a model that reads government forms at 85.5% accuracy. The real story is the 1.5 million-chart dataset underneath it — and what that means for the humans who currently do the same work for a living.
The headline sounds modest. The math underneath it does not.
A neural network told engineers to make a transistor bigger. The correct answer was smaller. A University of Florida paper shows causal AI gets analog circuit design right where standard neural networks get the direction completely backwards.
Two new preprints say quantum computers could break encryption with far fewer qubits than thought. But 'could' and 'will' are not the same word — and the runtime estimates tell a different story than the qubit counts.
The qubit threshold for breaking real encryption just dropped from millions to 10,000. But the machine that would actually do it still does not exist.
Adding more AI agents to a problem does not automatically improve the output. A new study finds the debate protocol determines whether agents converge faster or produce more diverse arguments, and you cannot tune for both.
Under a 300-second budget, subagent mode made 7 improvements. Agent teams made 3. The reason: specialists who cannot complete their handoffs are worse than no team at all. The paper advocates routing tasks dynamically based on complexity.