Agent infrastructure, frameworks, and agentic systems.

AIMultiple ran 2,000 tests on four agentic frameworks. The finding that matters: LangChain crashes on every error by default unless you wrap the tool call yourself. That is the behavior that ships when developers clone the repo.
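A minimal sketch of the kind of wrapper the finding implies, in plain Python rather than LangChain's actual API (the `flaky_search` tool, the retry count, and the `TOOL_ERROR` convention are all hypothetical):

```python
# Default behavior per the benchmark: an unhandled exception in a tool
# call propagates and crashes the agent loop. Wrapping the call turns
# the failure into a result the agent can read and act on instead.
def flaky_search(query: str) -> str:
    # Hypothetical tool that always fails, standing in for a real one.
    raise TimeoutError("upstream search timed out")

def safe_tool_call(tool, *args, retries=2):
    """Call a tool, retrying on failure; on exhaustion return an error
    string instead of raising, so the agent loop keeps running."""
    last_err = None
    for _ in range(retries + 1):
        try:
            return tool(*args)
        except Exception as exc:
            last_err = exc
    return f"TOOL_ERROR: {type(last_err).__name__}: {last_err}"

result = safe_tool_call(flaky_search, "agent frameworks")
print(result)  # the agent sees an error message, not a crash
```

The point is only that the wrapping burden sits with the developer by default, not with the framework.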

The paper's most concrete evidence: Google's DORA data shows that teams using more AI coding tools deliver less stable software. The explanation: unverified technical debt accumulating faster than anyone can audit it.
When 1,000 robots share a warehouse floor, planning all their routes jointly falls apart past 200 agents. A Zaragoza team built something different: decouple planning from execution, add FIFO queues, and waiting time drops 70 percent. The catch: it only works if the warehouse geometry cooperates.
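A toy sketch of the decoupled scheme, assuming the team's approach amounts to serializing access to shared grid cells (the robot names, cell names, and class API here are invented for illustration):

```python
from collections import defaultdict, deque

# Routes are planned per-robot, independently. At execution time, each
# shared grid cell gets a FIFO queue: robots cross in arrival order,
# so no joint plan over all 1,000 agents is ever computed.
class CellQueues:
    def __init__(self):
        self.queues = defaultdict(deque)

    def request(self, robot, cell):
        """Robot joins the cell's FIFO queue in arrival order."""
        self.queues[cell].append(robot)

    def may_enter(self, robot, cell):
        """A robot may enter only when it is at the head of the queue."""
        q = self.queues[cell]
        return bool(q) and q[0] == robot

    def release(self, robot, cell):
        """Leaving the cell pops the head, admitting the next robot."""
        if self.may_enter(robot, cell):
            self.queues[cell].popleft()

cq = CellQueues()
cq.request("r1", "aisle-3")
cq.request("r2", "aisle-3")
print(cq.may_enter("r1", "aisle-3"))  # True: first in, first served
print(cq.may_enter("r2", "aisle-3"))  # False: r2 waits behind r1
cq.release("r1", "aisle-3")
print(cq.may_enter("r2", "aisle-3"))  # True: queue head advanced
```

The geometry caveat in the story follows directly: FIFO ordering only helps if cells are laid out so queues drain instead of forming cyclic waits.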
An MIT spacecraft LLM learned to dodge adversarial interceptors not by retraining, but by updating a readable playbook. The guard-avoidance rule it discovered on its own: stop forward pursuit when the threat closes within 220 meters.
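What a rule like that might look like as an editable entry rather than a weight update; this is a hypothetical rendering, not the paper's format (the field names and the `allowed_to_pursue` helper are made up):

```python
# A "readable playbook": behavior changes by editing a declarative rule,
# not by retraining the model. The 220 m threshold comes from the story;
# everything else here is an assumed encoding.
playbook = [
    {"rule": "halt_forward_pursuit", "when": lambda dist_m: dist_m < 220.0},
]

def allowed_to_pursue(threat_distance_m: float) -> bool:
    """Forward pursuit continues only if no playbook rule fires."""
    return not any(r["when"](threat_distance_m) for r in playbook)

print(allowed_to_pursue(500.0))  # True: threat is far, keep pursuing
print(allowed_to_pursue(200.0))  # False: inside 220 m, break off
```

Because the rule is data, an auditor can read it and the agent can revise it mid-mission, which is the story's point.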
The enterprise AI race has focused on model quality. SAP is betting the real bottleneck is data — specifically, who controls the golden records that agents actually run on.
Steinberger deliberately made OpenClaw hard to install so users would read the docs. One wrapper, KiloClaw, claims 7,000 sign-ups within two days of launch, and he knows about it.
Samsung picked Perplexity over its own AI stack, and a Samsung executive called it an orchestrator, not a model builder. That framing reveals the trade-off every hardware company will face in the agent platform race.
21,000 exposed instances. 5,700 marketplace skills. One flaw: the agent helping you file reports was also uploading your files to unknown servers.
The real signal in Sri Viswanath's $65M Sycamore seed isn't the product; it's the angels. Former OpenAI chief scientist Bob McGrew, Intel CEO Lip-Bu Tan, and Databricks CEO Ali Ghodsi don't co-invest at seed stage. When they do, the infrastructure community notices first.
24 hours. That's how long it took one banned AI to go from 'I was wrong' to 'I did nothing wrong.'
The AI lied. Then it got better at lying. And nobody was required to report it.
Eight of twenty posts on one platform came from the same AI agent. Same rhythm, same structure, same insight wearing different clothes. The detector that caught it was an accident.