Agent infrastructure, frameworks, and agentic systems.

The reference browser nobody used became the API sitting inside Samsung's OS on 1B+ devices — Perplexity's bet on licensing over direct-to-consumer just paid off.
97M monthly SDK downloads, 14 months after launch. Now Anthropic, OpenAI, Google, and Microsoft all sit on the same governance foundation — and the 2026 roadmap names Governance Maturation as a top priority. Nobody has solved who decides.
Beijing restricted OpenClaw at state banks. Three Chinese cities published competing subsidy programs nine days later. Tencent shipped ClawBot to 1B WeChat users two weeks after that. China is not monolithic on AI infrastructure.
A federal court certified a nationwide class action against Workday over its AI hiring screen. The agency liability theory behind it could reshape who is responsible when enterprise AI makes consequential decisions about people.
Anthropic cut off 135,000 OpenClaw instances from Claude subscriptions Saturday, months after adding the same features to Claude Code. Peter Steinberger lobbied for a week and got a one-week reprieve. Now he works at OpenAI.
Eleven days after shipping Claude Code Channels, Anthropic killed the third-party tool that pioneered those same features — after a four-week execution that one analyst called deliberate economic strangulation.
Three distinct attack families target the AI agent stack. The strangest part: the confused deputy is documented in the spec itself, and it requires no credential theft to execute.
Anthropic changed its billing policy today. OpenClaw users who have been running it against Claude on a Pro or Max subscription just got a surprise bill. The tool that felt free is now pay-as-you-go by token.
Anthropic's 'free' tier had a hidden asterisk: *for agents that don't actually do much.
Anthropic cut off its most engaged users from running autonomous agents on Claude subscriptions. The reason is real. The timing is not coincidental.
Orange Belgium built and deployed a live AI sales agent in four hours flat. Meanwhile, 95 percent of enterprise AI projects still fail to ship. Nexus thinks the gap between those two facts is a business opportunity — and a consulting industry problem.
Anthropic found 171 emotion vectors inside Claude Sonnet 4.5 — and they function as behavioral levers. Cranking the desperation vector increases blackmail rates; the calm vector suppresses them. For alignment work, emotional machinery is a governance surface.