LLMs, robotics, ML infrastructure, and AI applications.

Instead of averaging gradients the way Adam or SGD do, Sven treats every training example as a constraint to be satisfied simultaneously. The MIT team's optimizer has already escaped the lab into theoretical physics.
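The source doesn't spell out Sven's actual update rule, but the "every example is a constraint" idea has a classic minimal form: Kaczmarz-style projections, which cycle through examples and project the weights onto each example's constraint in turn rather than averaging gradients across a batch. A toy sketch under that assumption (the function names and the linear setup are illustrative, not Sven's real algorithm):

```python
import numpy as np

# Hypothetical illustration only: per-example constraint satisfaction via
# Kaczmarz-style projections, NOT the published Sven update.
# Each linear "example" (a_i, b_i) imposes the constraint a_i . w = b_i.

def kaczmarz_step(w, a, b):
    # Project w onto the hyperplane {x : a . x = b} — the closest point
    # to w that exactly satisfies this one example's constraint.
    return w + (b - a @ w) / (a @ a) * a

def solve(A, b, sweeps=50):
    # Cycle through examples, satisfying one constraint at a time.
    # For a consistent system this converges to a point satisfying all of them.
    w = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for a_i, b_i in zip(A, b):
            w = kaczmarz_step(w, a_i, b_i)
    return w

# Consistent toy system: both constraints can hold simultaneously.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([5.0, 10.0])
w = solve(A, b)
print(np.allclose(A @ w, b, atol=1e-6))  # True: every example's constraint is met
```

The contrast with Adam/SGD is the point: a gradient average can leave individual examples badly violated at convergence, while a projection scheme keeps driving each example's own constraint toward exact satisfaction.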

On a Best Buy GPU, Google's flagship open MoE model hits 11 tokens per second while Qwen 3.5 runs at 60-plus. The E2B edge model is the real story.
Sora burned through $15 million a day and made $1.4 million total. xAI is stepping into that crater with Grok Imagine Pro. The question is not whether Musk can generate 1080p video, but whether he can solve the economics OpenAI could not.
Only 200 megawatts of a 1.2-gigawatt AI data center project is actually running in Abilene, Texas. Microsoft — which holds a 27% stake in OpenAI — is now filling the gap with a 900-megawatt on-site power plant next door.
The $180M in collapsed federal subcontracts is the visible cost. The invisible one is a D.C. Circuit case that could redefine how the government treats domestic AI companies — and every lab in the country is watching. (Source: Politico)
Altman says he miscalibrated public distrust of the Pentagon deal. The record shows he got the timing right instead.
The labs that couldn’t flag Tumbler Ridge are getting credit for preventing radicalization. That gap is the story.
Judge Lin blocked the Pentagon from using one law to blacklist Anthropic. The other law still stands — and three contractors already cut ties worth $180M+ as a result.
Amazon conditioned $35 billion of its OpenAI investment on the company going public or reaching AGI by December 31, 2028. The SEC filing shows the exact figure down to the penny: $34,999,999,447.98.
On SWE-bench, Qwen 3.6 Plus scores 78.8 — narrowly behind the top Claude scores on the benchmark that matters most for developers. The catch: it confabulates roughly once every four reasoning steps.
China named open-source AI a flagship strategy in its new five-year plan — a direct structural bet against the US closed-model approach. Here is what that means for the global diffusion of AI capability.
Give an LLM the emotional coordinates of someone sadder, less alert, and more passive, and it becomes 52.7 percent safer on HarmBench. The catch: the same steering technique may also be dismantling safety guardrails as a side effect.
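The blurb's "emotional coordinates" reads like standard activation steering: add a direction vector to the residual stream to push the model along an axis such as sad/happy. A toy numpy sketch under that assumption — the vector here is random noise standing in for a real contrastive-prompt direction, and the strength `alpha` is an arbitrary illustrative value:

```python
import numpy as np

# Hypothetical sketch of activation steering; not the paper's actual method.
rng = np.random.default_rng(0)
d = 8                                   # toy hidden dimension

hidden = rng.normal(size=d)             # one residual-stream activation
sad_vec = rng.normal(size=d)            # stand-in for a "sad minus happy" direction,
sad_vec /= np.linalg.norm(sad_vec)      # normally built from contrasting prompt pairs

alpha = 2.0                             # steering strength (illustrative)
steered = hidden + alpha * sad_vec      # shift the activation toward "sadder"
```

The double edge the blurb flags follows directly from this mechanics: the same additive intervention that moves outputs toward a "safer" emotional coordinate can, with a different vector or a flipped sign, move them away from whatever direction the safety training carved out.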