Three of the most plugged-in people in enterprise AI just put money on the same bet: that governance is the next category in agent infrastructure.
Sycamore, founded by former Atlassian CTO Sri Viswanath, announced a $65 million seed round Monday backed by an unusually specific group of angels. Bob McGrew, former OpenAI chief research officer. Lip-Bu Tan, Intel CEO. Ali Ghodsi, Databricks CEO. Coatue and Lightspeed led the round. None of these people are in the habit of writing seed checks to companies that do not yet exist. That is the story.
The company itself is pre-product. Sycamore is building what it calls the Trusted Agent Operating System — a platform for deploying and orchestrating AI agents inside enterprises with, in Viswanath's words, "security, governance, and human oversight built in." The architecture has a specific spine: agents earn autonomy through demonstrated reliability, moving from observation to action as they prove themselves trustworthy. Every operation is isolated, auditable, and governed from the start.
That progressive trust model is the interesting claim. Most enterprise agent deployments today are governed by the access controls built into the model provider's platform. Viswanath's bet is that enterprises will want a dedicated governance layer that sits across model providers, tracks agent behavior over time, and manages trust as a property that compounds. In short: a privileged access management system for AI agents, not just for humans.
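To make the progressive trust idea concrete, here is a minimal sketch of what "autonomy earned through demonstrated reliability" could look like in code. Sycamore has published no API; the tier names, thresholds, and the `AgentTrust` class below are all hypothetical illustrations of the general pattern, not Sycamore's design.

```python
# Hypothetical sketch of a progressive trust ladder for agents.
# All names and thresholds are illustrative, not Sycamore's actual system.
from dataclasses import dataclass

# Capability tiers, from read-only observation to unsupervised action.
TIERS = ["observe", "suggest", "act_with_approval", "act"]

@dataclass
class AgentTrust:
    tier: int = 0              # index into TIERS; every agent starts at "observe"
    successes: int = 0         # audited operations completed without incident
    promotion_threshold: int = 50  # clean operations required to move up a tier

    def record(self, ok: bool) -> None:
        """Log one audited operation and adjust the agent's tier."""
        if not ok:
            # Any incident demotes the agent and resets its track record.
            self.tier = max(0, self.tier - 1)
            self.successes = 0
            return
        self.successes += 1
        if self.successes >= self.promotion_threshold and self.tier < len(TIERS) - 1:
            self.tier += 1
            self.successes = 0

    @property
    def capability(self) -> str:
        return TIERS[self.tier]
```

The design choice worth noting is the asymmetry: trust accumulates slowly over many audited operations but collapses on a single incident, which is the same shape privileged access management systems use for human credentials.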
The competitive context makes this concrete. Sycamore is not entering an empty market. OpenAI has Frontier, its enterprise agent management product. Anthropic has Cowork. Microsoft has the agent service in Azure AI Foundry. AWS has Amazon Bedrock AgentCore. Every major model provider is building agent orchestration as a feature of its platform. The question Sycamore is betting on is whether enterprises will treat a dedicated, provider-agnostic governance layer as necessary, or whether they will default to the controls their primary model vendor provides.
Coatue calls it a "BFI" — a Big F Idea. The phrase means they think the idea expands the entire category, not just Sycamore's position within it. That framing is standard for a firm that wants to sound early rather than wrong. It is also possible they are right. Enterprises are not comfortable with how fast agent deployments are moving, and the compliance teams inside large organizations are several months behind whatever the engineering teams are shipping. A platform whose entire value proposition is "you will always know what your agents did and why" has a real problem to solve.
Viswanath spent over two decades building enterprise platforms at Sun, VMware, Groupon, and Atlassian, where he led the cloud transformation as CTO. He left Coatue, where he was an investor, in the fall to start Sycamore. His argument from the trenches is direct: every enterprise system today is built for humans doing the work. AI agents that can reason and act are the next major platform shift, and the systems beneath those agents were never designed with them in mind.
Sycamore says it is working with Fortune 500 companies but will not name them. The product does not yet exist in any accessible form — no pricing, no documentation, no public waitlist. The $65 million is intended to move agents "out of the lab and into production," per Viswanath's comments to SiliconANGLE. That is honest about the stage. It is also the point where the story shifts from infrastructure thesis to execution risk.
The angel list is the strongest evidence that the problem is real. McGrew, Tan, and Ghodsi have seen enough enterprise AI deployments to have opinions. They do not typically invest at the seed stage. When they do, and when they write checks alongside Coatue and Lightspeed for a governance-layer bet with no shipped product, they are voting on the operator and the timing, not the current state of the code. That is a legitimate bet to take seriously — and a legitimate reason to wait for evidence before treating it as solved.