The Trust Machine ThatForgot the User
Former IETF HTTP Working Group chair Mark Nottingham argues that AI agents represent a fundamentally new trust problem: machines acting on users' behalf in a world of other machines with their own interests, with no negotiated rules framework. He contrasts this with browsers,…

A wristwatch tells time. A screwdriver turns screws. The relationship between tool and user has always been simple: you point, it acts, nobody asks whose side it is on.
Mark Nottingham wants to know what happens when that stops being true.
Nottingham, who chaired the IETF's HTTP Working Group during the standards battles that shaped the modern web, published a blog post Friday arguing that AI agents represent something the computing industry has never built before: a machine that acts on your behalf, in a world of other machines that have their own interests, with no agreed-upon rules for any of it. The short version of his argument is this: browsers gave users collective leverage through standards. AI agents give vendors a blank check.
The post, titled "What's Missing in the 'Agentic' Story," has been circulating in standards and security circles since Friday. It is not a technical paper. It is an opinion piece from someone who has spent decades thinking about how the internet negotiates trust between parties that don't fully trust each other — and Nottingham's opinion is that the agentic AI moment has no negotiating framework at all.
The browser analogy
Nottingham's central argument runs through the web browser, which he describes as a form of "collective bargaining" between users and websites. When you open a page, your browser doesn't just display it — it sits between you and the site, enforcing a set of negotiated constraints on both sides. The site can access your screen and keyboard, but not your filesystem. You can run their code, but not read their server. These limits aren't enforced by goodwill; they're written into standards developed at the IETF and W3C, with input from browser makers, site operators, and advocates for users. The result, Nottingham argues, is something like a treaty — a set of rules both sides agreed to, arrived at through a process where neither party had all the power.
The mobile era fractured this model. iOS and Android are user agents in the same conceptual sense as browsers, but the decisions about what they allow and don't allow are made inside one corporation, opaquely, with no formal process for users or sites to contest them. Apple can change what Siri can do tomorrow. Google can shift what Android apps can access with six weeks notice. The market gives users some choice, but not the kind of negotiated, accountable leverage that the web standards process produced.
AI agents, Nottingham argues, are heading straight for the same trap — except worse, because an agent doesn't just render a page. It books flights, sends emails, moves files, calls APIs, spawns other agents. The surface area for something to go wrong, or to serve interests other than yours, is orders of magnitude larger.
The attack surface
The federal government has started trying to measure exactly how large that surface is. In January 2025, researchers at NIST's Center for AI Standards and Innovation ran red-team exercises against AI agent deployments using an enhanced version of the AgentDojo evaluation framework. Their finding: novel prompt injection attacks — where a malicious actor manipulates an agent through crafted inputs — achieved an 81 percent success rate. The baseline defense rate was 11 percent. That is a 7x gap between what attackers can do with optimized techniques and what defenders stop by default.
The work was documented by the Cloud Security Alliance in March, where it appeared as a single paragraph in a longer research note. The broader federal response has been to treat the problem as a standards question, not a safety question.
NIST launched its AI Agent Standards Initiative in February 2026, organized around three pillars: industry-led standards development, open-source protocol work, and fundamental security research. The National Cybersecurity Center of Excellence published a concept paper in February proposing to adapt existing identity frameworks — OAuth, SPIFFE, OpenID Connect — for AI agents as a new class of digital principal. A concept paper is not a standard. It is an intention.
The IETF, the internet's core standards body, chartered a working group in early 2026 targeting a standards-track specification by April 2026 and a best current practice document by August 2026, according to research by No Hacks. Several large platform vendors — Google, Cloudflare, Akamai, Amazon — have signaled support for the Web Bot Auth concept as one component of a trust layer.
The hard problem
What none of these efforts have solved is what practitioners call multi-hop delegation: the scenario where Agent A spawns Agent B, which calls Agent C, in the course of completing a single user task. The current OAuth standard handles single-hop delegation cleanly — a customer service agent acting on behalf of a human is a tractable authorization problem. But when the chain grows longer, with agents spawning sub-agents whose actions need to be attributable back to the original user, no standard exists to govern it.
As WorkOS documented, NIST's own concept paper flags multi-hop as an open question, not a solved problem. If your architecture depends on agent chains, you are in uncharted territory.
This matters because enterprise deployments are already doing it. The gap between what practitioners are shipping and what standards cover is not theoretical — it is the gap between "works in demo" and "auditable in production."
What a user agent would actually require
Nottingham's prescription is a user agent role for AI — a defined standard for what an agent acting on your behalf can and cannot do, with accountability mechanisms that users, sites, and services can reason about. The W3C's Workshop on Smart Voice Agents, held in early 2026, began identifying what that might look like: standardized protocols for agent-to-agent communication, frameworks for user consent and delegation, and mechanisms for transparency in multi-agent conversations. The IETF's draft on AI agent authentication and authorization — covering agent identifiers, credentials, attestation, and monitoring — represents the most concrete technical work in this direction so far, though it remains an early-stage document.
None of it yet adds up to the kind of negotiated, accountable framework that browsers achieved through years of standards battles. The difference, practitioners note, is that browser standards were largely worked out before the commercial web mattered. Agent standards are being developed in parallel with billion-dollar deployment decisions.
The clock
The IETF's April deadline for a foundational spec is weeks away. Whether the working group produces something substantive on that timeline, and whether it addresses the multi-hop problem that makes real-world agent chains auditable, is the near-term question to watch. NIST's standards initiative has three pillars and no published timeline for deliverables beyond "in development." The threat model — 81 percent attack success rates — is already documented. The protection model is not.
Nottingham does not offer a roadmap. He offers a diagnosis: the industry has spent fifty years assuming computers do what you tell them, and that assumption is retiring quietly, without a plan for what replaces it. The browser stumbled into collective leverage. AI vendors are stumbling right over it.






