The Research Agent That Could Make Bloomberg Irrelevant
FactSet, S&P Global, and PitchBook helped design the system that lets AI agents query their databases directly — the same data they charge thousands of dollars per seat to access. The next 90 days will show whether that was a smart deal or an own goal.

The companies that sell financial data to Wall Street are now building the code that could make their own products obsolete.
On Monday, Google released Deep Research Max in public preview: an AI agent, powered by its Gemini 3.1 Pro model, that can query external databases through the Model Context Protocol (MCP), an open standard for connecting AI models to external tools and data. For the first time, a commercial research agent can reach into FactSet, S&P Global, and PitchBook — three of the dominant providers of financial intelligence — and pull structured data as part of an automated research workflow. Those same three companies helped Google design the MCP servers that expose their data to the agent, according to Google's announcement.
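Mechanically, "querying through MCP" means JSON-RPC 2.0: the agent discovers a server's tools with a tools/list request, then invokes one with tools/call. Here is a minimal Python sketch of that request shape; the tool name and arguments are hypothetical, since neither Google nor the data providers have published their actual schemas.

    import json

    # The shape of an MCP tool invocation on the wire. MCP is JSON-RPC 2.0;
    # "tools/call" and its params structure come from the open specification.
    # The tool name and arguments below are hypothetical stand-ins.
    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "get_earnings_history",  # hypothetical FactSet-exposed tool
            "arguments": {"ticker": "NVDA", "quarters": 8},
        },
    }
    print(json.dumps(request, indent=2))

The server answers with structured results the model can feed straight into its next reasoning step, with no terminal session or human export in between.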
The arrangement is a deliberate paradox. FactSet, S&P Global, and PitchBook charge premium subscription fees — in a market where Bloomberg's terminal business alone generates billions per year — partly because accessing their data requires human effort. MCP removes the human from the loop. The incumbents are not naive; they are presumably negotiating for distribution, not attempting corporate self-destruction. But the architecture they are signing off on makes their data programmable by any agent built to the MCP standard, not just Google's. The question is whether the licensing terms they negotiate in the next six months preserve their pricing power or cede it.
Google's benchmarks for the system are substantial: 93.3 percent on DeepSearchQA, a research accuracy benchmark, and 54.6 percent on Humanity's Last Exam, a far harder test of multi-step reasoning, according to Google. Notable omissions: Google's comparisons left out OpenAI's GPT-5.4 Pro (89.3 percent on the BrowseComp agentic search benchmark) and Anthropic's Opus 4.6 (84 percent), as The Decoder noted. On the Atlas benchmark, which measures how well a model orchestrates multiple MCP-connected tools in sequence, Gemini 3.1 Pro scored 69.2 percent against the previous generation's 54.1 percent — a clear improvement, per Google's model card — though the trajectory matters more than the absolute number. What the benchmarks do not measure is whether enterprise finance teams will trust an agent to produce analysis they would previously have paid a human researcher to generate.
The skeptic's case exists and is not trivial. Perplexity's CTO publicly called MCP impractical at scale, citing high context window consumption and authentication friction as the core issues, per coverage by Awesome Agents. Perplexity's own Agent API, which launched in February, deliberately omits the MCP layer in favor of a simpler architecture. The implied bet: that the overhead of MCP exceeds its value for most use cases, at least in the current generation of models.
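The context complaint is concrete: every tool an MCP server exposes carries a JSON schema that is serialized into the model's prompt on each request, whether or not the tool gets called. A back-of-the-envelope Python sketch, in which the schema, the tool count, and the roughly-four-characters-per-token rule are all assumptions rather than measurements:

    import json

    # Illustrates why tool schemas eat context: each one is injected into
    # the prompt, used or not. This schema is a hypothetical example; real
    # financial-data tools tend to be far larger.
    tool_schema = {
        "name": "get_earnings_history",
        "description": "Return quarterly EPS and revenue history for a ticker.",
        "inputSchema": {
            "type": "object",
            "properties": {
                "ticker": {"type": "string"},
                "quarters": {"type": "integer", "minimum": 1, "maximum": 40},
            },
            "required": ["ticker"],
        },
    }
    per_tool_tokens = len(json.dumps(tool_schema)) // 4  # crude: ~4 chars/token
    num_tools = 50  # assumed count across three connected data providers
    print(f"~{per_tool_tokens} tokens per tool, "
          f"~{per_tool_tokens * num_tools} across {num_tools} tools")

Multiply that by every connected provider and the overhead arrives before the agent has done any work, which is the heart of the Perplexity critique.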
The next 90 days will answer the enterprise question more clearly than any benchmark. The first finance teams running real queries against real data — checking a thesis against FactSet earnings history, cross-referencing a target with S&P ratings, pulling PitchBook fundraising rounds — will reveal whether MCP in production is a genuine workflow change or a technically impressive demo that fails at the moment of actual use.
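Stripped of the plumbing, the workflow those teams will be testing looks something like the sketch below. The server and tool names are invented and the client is stubbed; the point is the chained, human-free data flow.

    # Hypothetical sketch of that three-step workflow, written against an
    # invented call_tool() helper rather than any real SDK.
    def research_target(ticker: str, call_tool) -> dict:
        earnings = call_tool("factset", "get_earnings_history", {"ticker": ticker})
        rating = call_tool("sp_global", "get_credit_rating", {"ticker": ticker})
        rounds = call_tool("pitchbook", "get_funding_rounds", {"ticker": ticker})
        return {"earnings": earnings, "rating": rating, "funding": rounds}

    # Stand-in for a real MCP client round-trip, so the sketch runs as-is.
    def fake_call_tool(server: str, tool: str, args: dict) -> dict:
        return {"server": server, "tool": tool, "args": args}

    print(research_target("NVDA", fake_call_tool))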