Mount Sinai Health System is rolling out OpenEvidence, the AI-powered medical search platform, to every clinician who touches a patient — not just its physicians, but also its nurses and pharmacists across seven hospitals and more than 400 outpatient practices, reaching its full workforce of 48,000 employees. The deployment, announced March 31, is the first enterprise-scale rollout of OpenEvidence to extend beyond doctors to the full care team. Daniel Nadler, the Harvard PhD who founded the company after selling the AI firm Kensho in 2018, called it the new baseline for what a modern hospital looks like.
The announcement puts numbers on a transition the industry has been gesturing toward for two years. OpenEvidence, which is free to physicians and supported by advertising and health system contracts, is now used daily by more than 40 percent of U.S. physicians across more than 10,000 hospitals and medical centers, according to the company. In December 2025 alone, the platform supported roughly 18 million clinical consultations, up from about 3 million per month a year earlier, FierceHealthcare reported. The company topped $100 million in annual revenue last year and raised roughly $700 million across three funding rounds in 2025 — including a $250 million Series D in January that doubled its valuation to $12 billion.
Mount Sinai is not the company's first large health system customer. Sutter Health, which serves more than 3.5 million patients across a network of hospitals in Northern California, has been using OpenEvidence since 2024. But the Sutter rollout, like most enterprise deployments before it, was aimed at physicians. The Sinai deal is explicitly designed to go further.
"At Mount Sinai, we prioritize innovation that solves core clinical problems and scales across the entire delivery system," said Nicholas Gavin, vice president and chief clinical innovation officer of Mount Sinai Health System, in the announcement. Girish Nadkarni, the system's chief AI officer and chair of the Windreich Department of AI and Human Health at the Icahn School of Medicine, framed the deal as part of a broader vision: "democratizing access to the latest clinical evidence for every member of the Mount Sinai care team" and reducing the cognitive burden on clinicians so they can "focus on what matters most — the patient."
The distribution question is real. Clinical decision-support tools have historically been physician-centric — designed for the person with prescribing authority and medical school training, then left to spread to the rest of the care team through informal channels, if at all. A nurse checking a drug interaction or a pharmacist verifying a dosing guideline typically works from memory, package inserts, or institutional references that may not reflect the latest literature. OpenEvidence's pitch is that the same AI-generated synthesis available to the doctor should be available to everyone in the room.
Embedded in Epic, the dominant U.S. hospital electronic health record system, the platform will let clinicians ask natural-language questions and receive answers drawn from peer-reviewed literature and clinical guidelines — without leaving the chart. That workflow integration is the product decision that separates it from the general-purpose AI chatbots physicians might otherwise turn to as a workaround.
The competitive picture makes the stakes plain. Epic, the EHR vendor, is building AI capabilities directly into its own workflow. OpenAI and Anthropic are explicitly targeting healthcare as a market. Nadler has been blunt about what he thinks of that competition: "Our view is that healthcare cannot be a side hustle," he told FierceHealthcare. "OpenAI has the unseat Google division, the unseat Apple division, the unseat Nvidia division and the unseat OpenEvidence division. We have one division. We wake up every morning thinking about healthcare." The argument is that depth of specialization — not model size — is what wins in a regulated, high-stakes domain like medicine.
Mayo Clinic appears to agree. The Rochester, Minnesota, health system is both an investor in OpenEvidence and a customer through the Mayo Clinic Platform Accelerate program — a combination that gives the startup something unusual in healthcare AI: a top-tier academic medical center with skin in the game on both sides of the table.
Mount Sinai has built substantial AI infrastructure of its own. The system opened the Hamilton and Amabel James Center for Artificial Intelligence and Human Health on its Manhattan campus and established what it calls the first dedicated AI department at a U.S. academic medical center. That organizational seriousness matters when evaluating whether a deployment like this will actually change clinical behavior or become another tool that clinicians ignore.
There are open questions worth watching. Scaling a physician-grade AI reasoning tool to nurses and pharmacists means dealing with a wider range of clinical workflows, different liability frameworks, and users who may have less training in interpreting probabilistic or hedged medical language. Whether the outputs that satisfy a physician will reliably satisfy a pharmacist or a registered nurse is a different question — one that the company and the health system will be answering in real time as the deployment rolls out.
The $12 billion valuation is doing a lot of work in the background. A company that is free to physicians and supports itself through health system contracts and advertising carries a different risk profile than a SaaS business with clear enterprise pricing. The revenue milestone — $100 million — is real, but the gap between $100 million and $12 billion requires a story about scale that the Sinai deployment, significant as it is, doesn't fully resolve on its own.
What the deal does confirm is that the model has moved past the pilot stage. OpenEvidence is no longer asking health systems to experiment with AI-assisted clinical search. It is asking them to make it standard operating procedure for an entire care team across a 48,000-person health system — physicians, nurses, and pharmacists all included. If that works, it changes what "standard" means. If it doesn't, it will be a useful data point about the limits of workflow embedding in complex clinical environments.