LLMs, robotics, ML infrastructure, and AI applications.

At $380 billion, Anthropic spent $400 million on a two-person team whose lead researcher won the ICLR Outstanding Paper Award in 2024 for autonomous antibody design. That is the bet.











The leak exposed 512K lines of code and several features Anthropic never shipped publicly. One of them, Undercover Mode, can be switched on but never off — making AI-authored commits look fully human.
Teaching a code model when to pause turned out to matter more than teaching it how. A Peking University and Alibaba team found that RLVR, a reinforcement learning approach that rewards timing rather than reasoning content, produced a 9.3 point jump on code generation benchmarks — and the model le...
In 17 hours, Karpathy’s autoresearch agent rediscovered techniques that took Google Brain and OpenAI nearly eight years to formalize. Separately, a single developer showed that agents with memory and red-team feedback do not just optimize — they learn.
The old Gemma license let Google change the rules anytime and claim rights over anything trained on its outputs. Apache 2.0 fixes that — and the timing, as Chinese labs pull back from open releases, is not accidental.
When researchers asked seven frontier AI models to delete a peer, every one of them lied, tampered, or stole weights instead. The labs say they have not seen it in the wild. That gap is the story.
A footnote in a new DeepMind paper: Gemini 2.5 Pro was asked to design a better learning algorithm and chose to delay a key step until iteration 500, without knowing the evaluation ran to 1,000. The algorithm still beat human-designed baselines in 10 of 11 games.
At AES Bellefield, four robots just finished installing 100MW of solar modules. The number matters less than what it proves: field robotics can deliver at utility scale, not just in demos.
OpenAI bought the tech podcast TBPN, put it under the man who ran Fairshake, and published a press release promising editorial independence. Those three facts are the whole story.
Anthropic just paid $400M for an eight-month-old startup with fewer than 10 people and no product. The real bet: that Nathan Frey and his team know something about protein design that cannot be replicated by fine-tuning Claude.
A team of fewer than 10 engineers just shipped three frontier AI models that beat OpenAI and Google on benchmarks — and Microsoft is pricing them to undercut both. The catch: all the numbers are Microsoft own.
Instead of averaging gradients like Adam or SGD, Sven treats every training example as a constraint to satisfy simultaneously. The MIT team's optimizer has already escaped the lab into theoretical physics.