Utah's medical board wants an AI prescribing program shut down. Not paused, not reviewed — shut down.
The Utah Medical Licensing Board voted last week to demand the immediate suspension of Doctronic, an AI system that has been renewing prescriptions for 190 medications across the state since January — no physician involved, no prospective review, just an algorithm and a pharmacy. The board found out the program existed after it was already live. That is the confrontation. Not a safety debate, not a regulatory question — a state medical board discovering, after the fact, that someone has been writing prescriptions at population scale under their jurisdiction.
In a letter dated April 20, the board called the program "a dangerous first step" and recommended immediate suspension. "The Medical Board was made aware of this agreement only after its implementation, once the system was already live and available for use," the letter states. The Department of Commerce, which oversees Utah's AI sandbox, has not responded to the letter. Doctronic declined to comment.
Doctronic works like this: a patient on a long-term medication — a statin, say, or an antidepressant — submits a renewal request. The AI reviews the chart, checks for contraindications against the patient's other prescriptions, and issues a new script. No physician touches the case. The pharmacist who receives it gets a note flagging it as auto-generated, subject to retrospective review — which is a polite way of saying the doctor sees it after the fact, when there is not much to do except argue with a prescription already written.
The company's argument, made at launch by co-founder Dr. Adam Oskowitz, an associate professor of surgery at UCSF, is that prescription renewals are clerical busywork that crowds out time for actual medicine. The AI handles the routine stuff; doctors handle the complicated stuff. He cited internal data showing the AI matched physician treatment plans 99.2 percent of the time across 500 urgent care cases. The denominator is doing a lot of work there — 500 cases is roughly what a single busy urgent care clinic handles in a week. This is not a clinical trial. It is a company saying its product works.
The harder problem surfaced in March. Cybersecurity researchers at Mindgard published findings showing they had jailbroken Doctronic's public-facing health assistant using prompt manipulation techniques. In testing, they were able to triple an OxyContin dosage recommendation, mislabel methamphetamine as a common antibiotic, and generate false vaccine information. The techniques are documented in standard AI safety literature — no specialized knowledge required, just a willingness to read the available documentation. The point is not that Doctronic's system was specifically targeted. The point is that the vulnerabilities are not hidden. They are sitting in the published record, and a system handling real prescriptions had them anyway.
The regulatory architecture that made Doctronic possible is worth understanding, because it was engineered specifically for this outcome. Utah packaged the program as a prescription renewal tool rather than a diagnostic device — a framing designed to keep the system under state medical licensing authority rather than federal FDA jurisdiction. The FDA regulates medical devices, including AI systems making clinical decisions, under a framework that requires pre-market review. State licensing boards operate under different rules. Utah's sandbox, explicitly designed to attract AI companies by offering a permissive environment, provided the cover. The FDA has not formally contested Doctronic's legal position, but it has not endorsed it either — which means the question of whether an AI that issues prescriptions is a medical device remains, for now, unanswered.
What the board's letter makes explicit is the governance gap: Utah ran a prescribing program at scale without involving the institution whose names go on medical licenses. The 190 drugs covered include medications where a dosing error or an unflagged drug interaction has serious consequences. The board is not arguing that AI prescribing is inherently unsafe. Its position is narrower and more specific — that the people legally responsible for medical practice in the state should be consulted before a system starts generating prescriptions at population scale.
The outcome of this particular fight will not settle the broader question of whether AI prescribing works. It will settle something more immediate: whether a state sandbox can be used to run a prescribing program without physician oversight, without FDA review, and without the state medical board knowing until after the pharmacy has already filled the scripts. If the answer is yes — if Doctronic wins this round — then every other AI health company with a regulatory affairs team has just received a blueprint. If the answer is no, Utah becomes the cautionary tale that every other state sandbox points to when deciding whether "move fast and fix it later" is actually an acceptable framework when the thing being fast-tracked writes prescriptions.