The rules for the AI healthcare market do not yet exist. But the people who will write them already know exactly who they protect.
On an American Hospital Association panel in April 2026, two of the most powerful figures in American healthcare sat on the same stage: Jonathan Perlin, CEO of the Joint Commission, and Ladd Wiley, head of public policy at Epic, the dominant electronic health record vendor in American hospitals. Also present: the AHA board chair. The topic was AI governance in healthcare. The American Hospital Association published an article about the event the same day. What happened on that stage was a preview of who will write the rulebook for an AI healthcare market that does not yet exist, and who will be left out of the room when it gets written.
The Joint Commission is no obscure standards body. It accredits more than 22,000 healthcare organizations and holds Medicare deeming authority under federal regulation, meaning hospitals that want to bill Medicare must meet its requirements. In September 2025, the Joint Commission and CHAI, the Coalition for Health AI, a standards body formed by major health systems and academic medical centers, released the RUAIH framework, short for Responsible Use of Artificial Intelligence in Healthcare. That regulatory moat is precisely why RUAIH carries weight that a white paper from a startup advocacy group never could, according to Mosaic Life Tech, a healthcare governance consultancy that tracks Joint Commission policy.

The framework is not standing still. The Joint Commission and CHAI have outlined a three-stage roadmap: advisory guidance, governance playbooks with implementation detail, and a voluntary AI certification program. Each step ratchets up the specificity of the requirements a health system must satisfy before deploying an AI tool. The certification milestone is the critical one. When a body with Medicare deeming authority offers a voluntary certification, the voluntariness is largely theoretical. Hospitals that deviate from certified practice expose themselves to accreditation risk. That is how voluntary standards become market entry requirements.
The political economy is not hard to read. Recall the April 2026 panel: Epic's head of public policy on the same stage as the Joint Commission's CEO and the AHA board chair. If the Joint Commission shapes AI governance standards and Epic participates in designing those standards, Epic's existing EHR integration becomes a competitive moat. Any AI tool that wants to operate inside an Epic-integrated hospital must either conform to those standards or convince the hospital to accept dual compliance burdens. Incumbents write the compliance manual, then point to it as proof that newcomers are noncompliant.
This dynamic is visible at the state level too. In 2025, 47 states introduced more than 250 bills related to AI in healthcare, according to Healthcare Brew, which tracks policy using the Manatt Health AI policy tracker. Thirty-three of those bills became law in 21 states. The legislative volume is real, but the direction is legible: a significant share of these laws mandate human review for AI-generated clinical output. The stated purpose is patient protection. The structural effect is to preserve specialist bottlenecks. When every AI finding requires a physician sign-off, you have not automated the clinical workflow. You have added a compliance step to it. That is not a bug if you are a physician practice or a hospital system that profits from the current labor model. It is a feature.
At the federal level, the landscape is fragmented in a way that favors the organized over the novel. The FDA is wrestling with the fact that its traditional medical device regulatory paradigm was not designed for adaptive AI and machine learning technologies. CMS is running WISeR, short for Wasteful and Inappropriate Service Reduction, a model that pilots AI and machine learning to support prior authorization and utilization review in traditional Medicare. The CDC has AI initiatives. NIST publishes frameworks. No single federal authority owns the AI healthcare governance question. That vacuum is precisely what the Joint Commission and CHAI are filling, not because Congress summoned them to do so, but because they moved first and because they have the regulatory standing to make their standards matter.
The most telling regulatory signal is one of absence. HTI-5, the latest federal health IT interoperability rule, removed requirements for AI model cards, the transparency documentation that would have let buyers and regulators inspect how a model was trained and what it was optimized for. The removal was framed as reducing administrative burden. The practical effect is that hospitals deploying AI tools face less federally mandated transparency while simultaneously facing more Joint Commission guidance on governance domains. Accountability has migrated from a federal transparency requirement to an organizational governance structure that the Joint Commission defines. That is a very different kind of accountability, one mediated through an institution hospitals already know and vendors already have relationships with.
The kill-if-false test for this story is straightforward: if the Joint Commission and CHAI developed the RUAIH framework with genuine, substantive input from patient advocates, independent AI researchers, and AI-native startups, input that shaped the standards rather than merely endorsing them, then the protectionist reading collapses. Nothing in the public record suggests that happened. The framework reads like a governance document written by organizations with existing regulatory authority and existing vendor relationships. The panel that put Epic's public policy executive alongside the Joint Commission CEO is documented by the American Hospital Association. The three-stage roadmap that ends in certification is a matter of public record. So is the fact that 47 states legislated in the absence of federal clarity.
None of this proves bad intent. The Joint Commission and CHAI argue, and the argument is credible on its own terms, that frameworks like RUAIH enable safer AI adoption and protect patients from harm. They are not wrong that AI deployment without governance produces real risks. Good governance matters.
But governance written by the parties with the most to gain from a particular market outcome is not neutral. The question is not whether RUAIH makes healthcare AI safer. It may. The question is whether the standards being hardened right now will make the healthcare AI market more competitive or less, whether they will lower barriers to entry for novel tools or raise them, whether startups and independent researchers will be in the room the next time these standards are revised, or whether the room will look the same as it did in April 2026.
The window to answer that question is not infinite. Certification programs, once established, are extraordinarily difficult to dismantle. The organizations that write the playbook tend to be the ones that stay in the room. The more than 22,000 organizations the Joint Commission accredits already know how to work with it. A startup building AI triage tools does not yet have that relationship. That asymmetry is the story.