The Lowest-Paid Person at the Bank Determines Your AI's Quality
The problem with deploying AI agents in large enterprises is not, generally speaking, the AI.

image from GPT Image 1.5
The problem is that roughly 70 percent of the knowledge those agents need was never written down in the first place.
That is the thesis behind Interloom, a Munich-based startup that announced on March 23, 2026, that it has raised $16.5 million in a new funding round led by DN Capital, a London-based venture firm, with participation from Bek Ventures and existing backer Air Street Capital, also London-based and focused on AI-first companies. The round follows a $3 million seed that Air Street led in March 2024. Fortune first reported the raise.
The idea draws on tacit knowledge, a concept coined by Michael Polanyi, the Hungarian-British philosopher and scientist who observed that expertise is largely non-transferable through documentation alone. "We know more than we can tell," Polanyi wrote — and for Interloom's founder and CEO, Fabian Jakobi, that gap is now a production problem for every company trying to deploy AI agents at scale.
"The most important person at the bank is the person who knows whether the documentation is right or not," Jakobi told Fortune. "They're often the lowest paid. But they determine quality."
Interloom's answer is what it calls a context graph: a continuously updated map built by ingesting millions of operational records — support emails, service tickets, call transcripts, work orders — and inferring from them how processes actually get resolved, not how some consultant's flowchart says they should. Jakobi's analogy is Google Maps: the navigation app doesn't plan routes by reading traffic regulations; it learns actual road conditions from real-time driver behavior. Interloom does the same for institutional knowledge.
This is a meaningfully different bet from deploying a retrieval-augmented generation (RAG) system over company documents. RAG retrieves what's written. Interloom's context graph captures what was done — the resolution paths, the escalation patterns, the workarounds that experienced staff apply without thinking. The distinction matters because most enterprise documentation is either incomplete, conflicting, or simply absent for the decision types that actually determine quality.
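The distinction can be made concrete with a toy sketch. A RAG system indexes documents and retrieves text; the approach described here mines resolution paths out of ticket logs instead. The schema and step names below are purely illustrative, not Interloom's — this is only meant to show what "capturing what was done" looks like in miniature:

```python
from collections import Counter, defaultdict

# Hypothetical ticket log: (issue_type, sequence of steps that actually
# resolved it). Field and step names are invented for illustration.
TICKETS = [
    ("card_blocked", ("verify_identity", "check_fraud_flag", "unblock")),
    ("card_blocked", ("verify_identity", "check_fraud_flag", "unblock")),
    ("card_blocked", ("verify_identity", "escalate_fraud_team")),
    ("address_change", ("verify_identity", "update_record")),
]

def build_context_graph(tickets):
    """Count how each issue type was actually resolved, path by path."""
    paths = defaultdict(Counter)
    for issue, steps in tickets:
        paths[issue][steps] += 1
    return paths

def most_common_resolution(graph, issue):
    """Return the resolution path observed most often for an issue type."""
    return graph[issue].most_common(1)[0][0]

graph = build_context_graph(TICKETS)
print(most_common_resolution(graph, "card_blocked"))
# -> ('verify_identity', 'check_fraud_flag', 'unblock')
```

Even in this tiny form, the graph surfaces something no policy document states: that one in three blocked-card tickets escapes the documented path and escalates instead — exactly the kind of documented-versus-actual gap the Commerzbank work measured.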
The company has moved past demos. At Commerzbank, the German multinational bank, Interloom analyzed millions of customer support emails and benchmarked them against existing internal documentation — finding the two were out of sync roughly half the time. The company says it reduced that documented-versus-actual gap from approximately 50 percent to 5 percent. At Volkswagen, the German automaker, it is processing customer support tickets. And at Zurich Insurance, the Swiss insurer, Interloom won the company's Agentic AI Hyperchallenge in October 2025 — beating more than 2,000 AI-native entrants, per Jakobi — for an underwriting use case.
Underwriting is a useful stress test. An underwriting decision at a large insurer reflects accumulated institutional risk appetite: its history with specific brokers, its experience with particular product categories, and judgment calls that no general-purpose model can replicate from its training data alone. "The Zurich underwriter knows how their broker chat underwriting works much better than Accenture does," Jakobi told Fortune, taking aim at the consulting firms that have historically owned enterprise process work.
Those metrics — the 50-to-5-percent documentation gap reduction at Commerzbank, the 2,000-entrant competition win at Zurich — are entirely self-reported. There is no independent benchmark, no published methodology, no open-source implementation. Interloom's engineering blog discusses its approach to agentic planning — a feedback loop with human verification rather than straight ReAct-style execution — but stops short of publishing implementation internals. The technical specificity reads like engineers describing what they built, not marketing copy, but it still can't be externally verified.
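For readers unfamiliar with the contrast the blog is drawing: a ReAct-style agent interleaves reasoning and action and executes as it goes, while a verification loop proposes a plan and only proceeds once a checker signs off. The control-flow skeleton of the latter looks roughly like the sketch below — every name here is hypothetical, and `propose`/`verify` stand in for an LLM planner and a human reviewer; this is not Interloom's implementation:

```python
def plan_with_verification(task, propose, verify, max_rounds=3):
    """Propose a plan, ask a verifier, revise on feedback until approved.

    `propose(task, feedback)` and `verify(plan)` are stand-ins for an
    LLM planner and a human (or automated) checker.
    """
    feedback = None
    for _ in range(max_rounds):
        plan = propose(task, feedback)
        approved, feedback = verify(plan)
        if approved:
            return plan
    raise RuntimeError("no approved plan within max_rounds")

# Toy planner and verifier: the verifier insists plans end with "confirm".
def toy_propose(task, feedback):
    steps = ["lookup", "act"]
    if feedback == "missing confirmation":
        steps.append("confirm")
    return steps

def toy_verify(plan):
    if plan[-1] == "confirm":
        return True, None
    return False, "missing confirmation"

print(plan_with_verification("unblock card", toy_propose, toy_verify))
# -> ['lookup', 'act', 'confirm']
```

The design point is that nothing executes until a plan passes review — which is where "the person who knows whether the documentation is right" re-enters the loop as the verifier.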
For Air Street, this is a second lap around the same thesis. The firm previously backed Adept, a San Francisco-based startup that also framed itself around teaching AI systems to operate on behalf of humans in enterprise environments. That bet did not end cleanly: Amazon hired Adept's founders in July 2024 in a structured acqui-hire. Investors who had put some $414 million into Adept roughly recouped their money; the technology was licensed to Amazon; the company shell remained. Air Street's investment memo for Interloom's 2024 seed round explicitly calls out Interloom's rejection of "piling more layers or plug-ins on top of legacy technology" — a line that reads differently now that one well-capitalized prior bet on the same problem ended with its founders at a hyperscaler.
The new investors frame their conviction in structural terms. "An agent is only as good as the expert decisions it can rely on," said Guy Ward Thomas, a partner at DN Capital. Mehmet Atici of Bek Ventures adds another data point: Bek previously backed UiPath, the New York-based automation software company that led the prior wave of robotic process automation (RPA). RPA hardcoded workflows and worked until a process deviated from the script. "We've seen automation's transformative potential firsthand and we believe AI is now unlocking a new wave of rapid adoption in the enterprise," Atici said.
There is a demographic tailwind Jakobi did not invent but is happy to cite. Roughly 10,000 Baby Boomers retire daily in the United States, and with them go decades of undocumented institutional knowledge — departing precisely as enterprises try to deploy AI agents to do what those people used to do. The window for capturing that knowledge before it walks out the door is, in theory, closing in real time.
Interloom is not operating in an empty field. OpenAI, ServiceNow, and Microsoft all have agent management and context products in various stages of development, and almost every enterprise software vendor has some variation of "agents need organizational context" on their roadmap now. Jakobi identifies his biggest competitor as inertia — the assumption inside large enterprises that the next decade will look like the last. He is not wrong that it's the hardest thing to sell against, but the competitive ceiling is also every large platform vendor deciding this problem is important enough to solve themselves.
Interloom's next product push is what it internally calls a "Chief of Staff": a management layer designed to give managers real-time visibility into how their AI agents are performing, with version control for agent-driven processes. The framing treats agent processes like software — with all the version-control discipline software engineering developed for good reason. Whether that layer becomes a durable moat or a feature that every agent platform ships by default is the question worth watching.
Jakobi has covered this ground before. His previous company, Boxplot, built AI-augmented knowledge graphs for enterprise customers and bootstrapped to Fortune 500 clients before Hyperscience, a New York-based AI software company focused on document processing, acquired it in 2021. The context graph architecture Interloom is now building looks like a direct evolution of that work — knowledge graph infrastructure, reoriented toward agent context rather than document extraction. That lineage gives the technical claim more surface to stand on than a clean-sheet story would.

