Salesforce Now Runs Its AI Agents on Infrastructure It Doesn't Own

Salesforce and seventeen other enterprise software companies are building their AI agents on NVIDIA infrastructure — and the real story is the dependency graph NVIDIA is constructing underneath them.
On March 16, Salesforce announced it had integrated NVIDIA Nemotron models into its Agentforce platform as part of a broader partnership anchored in NVIDIA's Agent Toolkit, the open-source platform NVIDIA unveiled at GTC 2026. The announcement made Salesforce one of seventeen enterprise software companies — alongside Adobe, SAP, ServiceNow, Siemens, CrowdStrike, and fourteen others — signing on to build their next generation of AI products on a stack NVIDIA designs, optimizes, and maintains.
The partnership gives Salesforce customers access to Nemotron 3 Nano, an open Mixture of Experts model with a 1 million token context window, inside Agentforce. Agents can reason across long customer histories, large documents, and multi-step workflows without losing context. For regulated industries — financial services, healthcare — the models can run on NVIDIA AI infrastructure inside on-premises or private-cloud environments, keeping processing within the organization's own security boundary.
But the more consequential infrastructure piece for agent builders is NVIDIA OpenShell.
OpenShell is the infrastructure that matters
OpenShell is an open-source runtime that enforces policy-based security, network, and privacy guardrails around autonomous agents — what NVIDIA calls "claws." Enterprise IT has been the gating function on agent deployment: give an AI system access to corporate systems and the conversation stops at "but what if it does something wrong?" OpenShell is NVIDIA's answer: a sandbox that restricts what agents can access, where they can send data, and what actions they can take — with guardrails enforced at the runtime level, not bolted on afterward.
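NVIDIA has not published OpenShell's configuration surface, so the mechanics are worth sketching rather than asserting. The snippet below is a hypothetical illustration of what runtime-level guardrails could look like: every class, field, and policy name here is invented for this article, not NVIDIA's actual API.

```python
# Hypothetical sketch of runtime-enforced agent guardrails in the spirit of
# OpenShell. Every name here is invented for illustration; NVIDIA's actual
# policy format and API are not public in this article.
from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    allowed_tools: set = field(default_factory=set)   # actions the agent may take
    allowed_hosts: set = field(default_factory=set)   # network egress whitelist
    redact_fields: set = field(default_factory=set)   # data the agent never sees

class PolicyViolation(Exception):
    pass

class SandboxedAgent:
    """Wraps an agent so every tool call is checked before execution:
    guardrails enforced at the runtime level, not bolted on afterward."""

    def __init__(self, policy: AgentPolicy):
        self.policy = policy

    def call_tool(self, tool: str, target_host: str, payload: dict) -> dict:
        if tool not in self.policy.allowed_tools:
            raise PolicyViolation(f"tool '{tool}' not permitted")
        if target_host not in self.policy.allowed_hosts:
            raise PolicyViolation(f"egress to '{target_host}' blocked")
        # Strip fields the policy marks private before anything leaves the sandbox.
        safe = {k: v for k, v in payload.items() if k not in self.policy.redact_fields}
        return {"tool": tool, "host": target_host, "payload": safe}

# Example: a compliance agent may query the internal CRM but nothing else,
# and sensitive fields never leave the boundary.
policy = AgentPolicy(
    allowed_tools={"crm_query"},
    allowed_hosts={"crm.internal.example.com"},
    redact_fields={"ssn"},
)
agent = SandboxedAgent(policy)
result = agent.call_tool("crm_query", "crm.internal.example.com",
                         {"account": "A-1", "ssn": "000-00-0000"})
```

The design point the article attributes to OpenShell is the wrapper's placement: the check happens in the runtime path of every call, so an agent cannot take an action the policy was never asked about.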
NVIDIA built OpenShell in collaboration with Cisco, CrowdStrike, Google, Microsoft Security, and TrendAI, which means the security industry is being enlisted as a validation layer rather than competing with NVIDIA's approach. That matters. If OpenShell becomes the de facto agent security runtime, those vendors' tools become prerequisites for enterprise sales — creating a mutually reinforcing ecosystem around NVIDIA's foundation.
"OpenShell gives autonomous agents the access they need to be productive while enforcing policy-based security, network and privacy guardrails," according to NVIDIA's announcement. Whether that holds under adversarial conditions in production environments remains the open question every enterprise IT team will ask before signing a check.
The cost architecture that could unlock enterprise adoption
The other piece enterprise buyers will parse closely is AI-Q, NVIDIA's open blueprint for agentic search. Its hybrid architecture routes complex orchestration to frontier models while delegating research tasks to Nemotron's open models — a design NVIDIA claims cuts query costs by more than 50 percent while maintaining top-tier accuracy. NVIDIA used AI-Q to build what it says is the top-ranking agent on both the DeepResearch Bench and DeepResearch Bench II leaderboards.
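NVIDIA has not published AI-Q's routing logic or pricing, so the arithmetic behind the hybrid claim is best shown as a toy model. The per-call prices and the one-orchestrator-many-subtasks split below are assumptions chosen for illustration, not figures from the announcement.

```python
# Toy model of the AI-Q hybrid idea: one orchestration call on a frontier
# model, with research subtasks delegated to a cheaper open model. Both
# prices are assumed for illustration; NVIDIA publishes neither.
FRONTIER_COST_PER_CALL = 0.040   # assumed frontier-model price per call
OPEN_COST_PER_CALL = 0.004       # assumed open-model (Nemotron) price per call

def hybrid_query_cost(n_subtasks: int) -> float:
    """One frontier orchestration call plus n delegated open-model subtasks."""
    return FRONTIER_COST_PER_CALL + n_subtasks * OPEN_COST_PER_CALL

def frontier_only_cost(n_subtasks: int) -> float:
    """Baseline: every step, orchestration and research, on the frontier model."""
    return (1 + n_subtasks) * FRONTIER_COST_PER_CALL

subtasks = 8
hybrid = hybrid_query_cost(subtasks)      # 0.040 + 8 * 0.004 = 0.072
baseline = frontier_only_cost(subtasks)   # 9 * 0.040 = 0.360
savings = 1 - hybrid / baseline           # 0.80 under these assumptions
```

Under these invented numbers the hybrid route clears NVIDIA's 50-percent claim easily; the real question is whether the open model's answers hold up on the delegated subtasks, which is what the DeepResearch Bench rankings are meant to attest.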
That cost claim is significant. Enterprise AI adoption has stalled not because the technology doesn't work in pilots but because it doesn't pencil out at scale. An agent that costs forty cents per query to reason across a five-hundred-page legal contract is fine in a demo; it's a budget nightmare when deployed to ten thousand knowledge workers handling customer records around the clock. If the hybrid architecture genuinely delivers fifty-plus percent cost reduction versus frontier-only approaches, it's the economic unlock that moves agents from pilot to production.
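The "budget nightmare" is easy to make concrete. Taking the article's forty-cents-per-query figure at face value, and assuming a query volume per worker (the announcement gives none), the annual bill scales like this:

```python
# Back-of-envelope arithmetic for the forty-cents-per-query example.
# The queries-per-worker figure is an assumption, not from the announcement.
cost_per_query = 0.40              # dollars, from the article's example
workers = 10_000
queries_per_worker_per_day = 50    # assumed volume for a round-the-clock team

daily = cost_per_query * workers * queries_per_worker_per_day   # $200,000/day
annual = daily * 365                                            # $73,000,000/year
recovered_by_halving = annual * 0.5                             # what a 50% cut saves
```

Even if the assumed volume is off by an order of magnitude, the shape of the problem is the same: per-query cost is the term that decides whether the deployment survives the budget review.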
The Salesforce reference architecture spells out how the layers fit together in practice: Slack as the collaboration layer, Slackbot as the coordination layer receiving user requests and triggering Agentforce workflows, Data 360 as the trusted enterprise context layer, and NVIDIA AI infrastructure and Nemotron models as the processing layer. A request originates in Slack, triggers an Agentforce workflow, reasons over Data 360 context, invokes Nemotron-powered processing, and returns recommendations into the conversation. No diagram required — the architecture is coherent and deployable.
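The request path described above can be sketched as a four-stage pipeline. Every function here is a stand-in named after the layer it represents; none of these are real Slack, Agentforce, Data 360, or Nemotron APIs.

```python
# Sketch of the request path in the Salesforce reference architecture.
# All functions are illustrative stand-ins, not real vendor APIs.
def slack_request(text: str) -> dict:
    """Collaboration layer: a user asks a question in Slack."""
    return {"channel": "#compliance", "text": text}

def trigger_agentforce(request: dict) -> dict:
    """Coordination layer: Slackbot maps the request to an Agentforce workflow."""
    return {"workflow": "transaction-review", "query": request["text"]}

def fetch_data360_context(workflow: dict) -> dict:
    """Context layer: pull trusted enterprise records for the workflow."""
    return {**workflow, "records": ["txn-001", "txn-002"]}

def nemotron_process(ctx: dict) -> str:
    """Processing layer: reason over the context on NVIDIA infrastructure."""
    return f"{ctx['workflow']}: reviewed {len(ctx['records'])} records"

def run(text: str) -> str:
    # A request originates in Slack, triggers a workflow, reasons over
    # Data 360 context, and returns a recommendation to the conversation.
    return nemotron_process(fetch_data360_context(trigger_agentforce(slack_request(text))))

summary = run("Flag risky transactions from last quarter")
```

The point of the layering is that each stage owns one concern: Slack owns the conversation, Agentforce owns orchestration, Data 360 owns trusted context, and the model layer only ever sees what the context layer hands it.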
For a regulated financial services company, the example given is a compliance agent reviewing transactions, applying policy rules, and surfacing risk signals in a secure Slack channel. For healthcare, a case history summarization agent enforcing data access controls defined within Salesforce. These are concrete use cases, not vaporware scenarios.
What the wire missed
The Yahoo Finance summary treated this as a bilateral Salesforce-NVIDIA partnership. It is not. It is an ecosystem announcement. NVIDIA is building the shared substrate — models, runtimes, security frameworks, optimization libraries — and then charging enterprise software vendors to build on top of it. As VentureBeat reported from GTC, the implication Jensen Huang left unstated: "as AI agents proliferate across the corporate world, they will generate demand for Nvidia GPUs not because companies choose to buy them but because the software they depend on was engineered to require them."
The wire also missed Jensen Huang's direct invocation of OpenClaw and Claude Code. Speaking at GTC, Huang said: "Claude Code and OpenClaw have sparked the agent inflection point — extending AI beyond generation and reasoning into action." That quote positions OpenClaw — and the broader open-source agent runtime ecosystem — as a formative force in the shift he is now industrializing for enterprise. For builders in this space, that is the acknowledgment from the most influential AI infrastructure company on the planet that the agent era is here, and that the infrastructure layer matters as much as the models.
What to watch next
The proof is in production. Every enterprise partner NVIDIA signed will spend the next six months building proof-of-concept deployments. The OpenShell security model needs adversarial testing in real enterprise environments before anyone calls it enterprise-grade. The AI-Q cost claims need independent validation on the DeepResearch Bench leaderboards — those benchmarks are synthetic, and enterprise workflows are messier.
For agent infrastructure builders, the interesting question is not whether Salesforce ships agents — it will. It is whether OpenShell becomes the standard security runtime that other agent frameworks adopt, or whether it stays NVIDIA-specific. An open-source runtime that only runs well on NVIDIA hardware is open source in name only. The real infrastructure story is what happens if OpenShell gets ported to other accelerator targets — or if the open-source community forks it.
Sources: NVIDIA Newsroom | Salesforce News | VentureBeat

