When a development team hooked an AI model to a popular tool-sharing protocol and ran it the obvious way, giving the model access to everything at once, the model spent more than a fifth of its working memory just reading the menu: over 20 percent of a 200,000-token context window, upwards of 40,000 tokens, gone before the model solved a single problem. That measurement, described by the protocol's co-creator at a conference in London last week, is the actual constraint on what the agent infrastructure boom can deliver: the technology exists to connect AI models to external tools at scale, but naive implementations of that connection consume the very context window that makes reasoning possible.
The fix has a name: progressive discovery, loading tools on demand rather than all at once. But it has a problem. The protocol that defines how AI models call external tools, MCP (Model Context Protocol), supports that pattern; it does not require it. Every team that built it the obvious way first is now living with the consequences. OpenAI's agent SDK and LangChain both pulled MCP in as a dependency, which means the naive implementation is running inside production systems right now, burning context on tool listings instead of on the actual task.
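The difference between the two approaches is easy to see in miniature. The sketch below is illustrative, not taken from any real MCP SDK: the `ToolRegistry` class, the tool data, and the four-characters-per-token estimate are all assumptions made up for this example. Eager loading puts every tool's full schema in context up front; progressive discovery lists one-line summaries and fetches a schema only when the model asks.

```python
from dataclasses import dataclass

@dataclass
class Tool:
    name: str
    summary: str      # one line, cheap to list
    full_schema: str  # verbose JSON schema, expensive in context

class ToolRegistry:
    """Progressive discovery: cheap summaries up front, full schemas on demand."""

    def __init__(self, tools):
        self._tools = {t.name: t for t in tools}

    def list_summaries(self) -> str:
        # The eager approach would concatenate every full_schema here instead.
        return "\n".join(f"{t.name}: {t.summary}" for t in self._tools.values())

    def load(self, name: str) -> str:
        # Called mid-conversation, for one tool at a time.
        return self._tools[name].full_schema

def tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: roughly 4 characters per token.
    return len(text) // 4

# Fifty fake tools, each with a ~1,000-character schema.
tools = [Tool(f"tool_{i}", f"does task {i}", "{...}" * 200) for i in range(50)]
reg = ToolRegistry(tools)

eager = sum(tokens(t.full_schema) for t in tools)  # all schemas in context
lazy = tokens(reg.list_summaries())                # summaries only
print(eager, lazy)  # the lazy listing is a small fraction of the eager cost
```

The gap only widens as the tool count grows, which is why the pattern matters most for exactly the large internal catalogs the protocol was meant to expose.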
At Uber, the scale made the problem unavoidable. Five thousand engineers, 10,000 internal services, 1,500 monthly active agents completing 60,000 executions per week, as described at the MCP Dev Summit North America. Making that work required building a control layer on top of MCP itself: a central gateway handling tool registries, authentication, and access governance that the protocol does not provide. MCP defines the interface. The reliability lives in everything built around it.
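The shape of such a control layer can be sketched in a few lines. This is a hypothetical illustration, not Uber's implementation or any real MCP API: the `Gateway` class and its method names are invented for this example. The point it makes is structural: the protocol defines the call shape, while registry, authentication, and access policy live in the wrapper around it.

```python
from typing import Callable

class Gateway:
    """Illustrative control layer: a central chokepoint that owns the tool
    registry and an access-control list, neither of which MCP provides."""

    def __init__(self):
        self._registry: dict[str, Callable] = {}  # tool name -> handler
        self._acl: dict[str, set[str]] = {}       # tool name -> allowed agents

    def register(self, name: str, handler: Callable, allowed_agents: list[str]):
        self._registry[name] = handler
        self._acl[name] = set(allowed_agents)

    def call(self, agent_id: str, name: str, *args):
        # Governance happens here, outside the protocol itself.
        if name not in self._registry:
            raise KeyError(f"unknown tool: {name}")
        if agent_id not in self._acl[name]:
            raise PermissionError(f"{agent_id} may not call {name}")
        return self._registry[name](*args)

gw = Gateway()
gw.register("billing_lookup", lambda acct: {"acct": acct, "ok": True},
            allowed_agents=["support_agent"])

print(gw.call("support_agent", "billing_lookup", "A-42"))
```

An agent outside the ACL gets a `PermissionError` before the handler ever runs, which is the whole argument for centralizing the check rather than trusting each tool server to enforce it.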
That governance gap is what the protocol's co-creator, David Soria Parra of Anthropic, is now trying to close. His second bet is harder than the first. At AI Engineer Europe, he outlined a 2026 roadmap that extends the protocol from sharing which tools an organization has to sharing what it knows: domain knowledge, institutional context, the accumulated data behind the tools themselves. The argument is that an agent does not just need to know it can call a spreadsheet. It needs access to what the organization actually knows about spreadsheets. MCP as a knowledge distribution layer, alongside its existing role as a tool connectivity layer.
The numbers confirm the first bet paid off. MCP crossed roughly 97 million monthly SDK downloads by March 2026, a figure consistent across multiple independent package-manager readings; the 110 million that Soria Parra cited in his keynote days later was self-reported. These are real systems under real load. The protocol works.
The skeptic's case does not require a technical argument. Soria Parra benefits from MCP being central to the agent stack. His keynote is vendor marketing with a credible face. The jump from tool connectivity to knowledge distribution is the kind of roadmap that sounds inevitable on a conference stage and is much harder in production: it runs into every legacy system, undocumented workflow, and format inconsistency that actual organizations have accumulated.
What makes the second bet worth watching is not whether the architecture is sound. It is whether the organizations that bet on MCP in 2025 will bet again in 2026, or whether the governance gap and the context window problem will convince them that the protocol is scaffolding rather than foundation.