10,000 MCP Servers, Zero Threat Mitigation Built In
According to researchers presenting at RSAC 2026, the Model Context Protocol's security flaws are architectural — a consequence of design choices that prioritized ease of adoption over threat mitigation.

image from FLUX 2.0 Pro
The Model Context Protocol was built to make AI agents useful. Its security problems were built in at the foundation.
That's the central argument from security researchers tracking MCP vulnerabilities, and it's the focus of a Dark Reading article published this week headlined "AI Conundrum: Why MCP Security Can't Be Patched Away." The piece synthesizes research being presented at RSAC 2026 next week, including findings from Token Security researcher Ariel Simon, whose talk is titled "MCPwned."
MCP, developed by Anthropic and increasingly adopted across the AI industry as a de facto connectivity standard, allows large language models to interact with external systems — databases, file systems, cloud platforms, SaaS applications. Amazon, Microsoft, Google, and OpenAI all offer MCP integration. The protocol is described by practitioners as "USB-C for AI" — a universal interface that lets any AI agent connect to any compatible server. There are now more than 10,000 publicly available MCP servers in various directories and registries.
The security community's concern is that this rapid adoption has outpaced security engineering. According to research cited across multiple sources, MCP was designed for ease of adoption, not threat mitigation. The result is a class of vulnerabilities that researchers describe as architectural — problems rooted in the protocol's assumptions about trust, not in specific implementation bugs.
The specific flaw categories are well-documented at this point. According to an analysis from Pivot Point Security and corroborated by Endor Labs research, these include prompt injection (malicious instructions embedded in data the LLM processes), tool poisoning (where a trusted MCP server is replaced by a malicious one), credential theft, excessive privilege grants, SSRF, path traversal, and OAuth misconfigurations. The "confused deputy" problem is a recurring theme: because MCP servers often operate with broad access scopes, a compromised or malicious server can exfiltrate data through permissions the user didn't know were being exercised.
The numbers support the urgency. According to Adversa AI's March digest, 30 CVEs related to MCP were filed in a 60-day window — a disclosure rate that reflects both genuine vulnerability density and growing scrutiny.
At RSAC 2026, Simon will demonstrate a remote code execution flaw in Microsoft's official Azure MCP server that could allow an attacker with network access to extract Azure credentials and compromise a victim's Azure and Entra ID environment. Simon's framing, per the conference abstract: traditional security controls like API gateways and WAFs cannot validate agent identity based on environment attestation or verify context authenticity in MCP workflows. The dynamic, autonomous nature of MCP interactions exposes a fundamental mismatch between how security tools think about access and how MCP actually operates.
The Dark Reading piece quotes a researcher making the point directly: if Anthropic — the protocol's originator — gets the security model wrong in their official reference implementation, the entire ecosystem can get it wrong. Anthropic did not respond to Dark Reading's request for comment.
Our read: this framing is important and correct. "Architectural" doesn't mean unsolvable — it means the fix isn't a code patch, it's a redesign of the trust model. Whether enterprises deploying MCP servers today are willing to accept that reframing is a different question. Ease of adoption beat security in the protocol's design. Now the bill is coming due.

