Utah signed a secret contract with an AI company, Doctronic, last October. The AI cannot prescribe anything without a physician's sign-off. Doctronic holds all the liability; the state holds none.
The full agreement, signed October 23rd and 24th, is now public. Its specific terms have not been previously reported. Doctronic bears all liability under the contract, which requires the company to hold Utah's Office of Artificial Intelligence Policy and its Division of Professional Licensing harmless from any claims, damages, or expenses arising from the AI's work. Patients harmed by the AI retain their right to sue Doctronic directly; the contract explicitly preserves that remedy. But the state has contractually insulated itself from any knock-on cost.
The contract also grants protection to any physician employed or contracted by Doctronic who "acts in reliance on Participant's artificial intelligence technology to facilitate the renewal of a prescription solely by being the named prescriber for such renewals and solely for such action, and does not interact directly with a patient or other health care provider." In other words: the physician does not have to see the patient. The AI surfaces the recommendation, the physician's name goes on the prescription, and that is legally sufficient under Utah's telehealth statute as modified by this agreement.
Dr. Robert Steinbrook, director of Public Citizen's Health Research Group, put it directly: "AI is a software application, not a licensed physician or other medical professional. Decisions made by AI with human oversight are in practice decisions made by AI most of the time."
The OAIP's own director signed a document that is, by its own terms, not an endorsement of anything. The agreement states it "does not constitute an endorsement or approval from the State of Utah or any of its political subdivisions of Participant's use of artificial intelligence technologies." The state is running an experiment it has also explicitly declined to endorse.
The contract also permits Doctronic to request a single extension of up to 12 months, filed no later than 30 days before the initial period expires. That means the no-liability, no-robust-evidence period could stretch to October 2027 with one letter to the OAIP.
The pilot, now in its fourth month, covers prescription renewals for chronic conditions: diabetes, hypertension, and high cholesterol. These conditions account for roughly 80 percent of all prescription activity in the United States. The price point is competitive: four dollars per renewal. Doctronic says it matched physician treatment plans in 99.2 percent of 500 urgent care test cases run before launch.
But the state says the program remains in Phase 1, meaning every single prescription request still requires authorization by a licensed medical practitioner before anything happens. The AI does not yet operate independently on any medication class. The Office of Artificial Intelligence Policy put it plainly in a status update published to its website around January 28th: "The Office does not yet have robust evidence on benefits, as most outcome measures are scheduled for later phases once safety is further established." The Office has received anecdotal reports of potential benefits: multilingual support, 24/7 availability, immediate feedback, and more thorough medical history collection. No serious safety incidents have been reported to the state. That is good news. But the outcome measures that would tell you whether this actually works better than a phone call to your pharmacy are not scheduled to arrive until later phases.
Meanwhile, Doctronic is already in conversations with Texas, Arizona, and Missouri about replicating the model. The company's co-founder, whom FierceHealthcare has also identified as such, told The Hill in January that he expected a dozen other states to approve similar programs by the end of 2026. That is a fast timeline for a program that has been running in its current form for roughly 90 days.
Lowell Schiller, who served as chief counsel at the Food and Drug Administration under the Biden administration, told type0 that the Utah arrangement represents a genuine regulatory gap, not because the state has done something illegal, but because no federal framework currently addresses AI systems as prescribing authorities. "The question is not whether this is allowed," he said. "It's whether the governance structure that allows it is adequate for the risk."
Dr. Oskowitz, Doctronic's co-founder, has argued that the AI can outperform human review on routine tasks and errs on the side of safety. The company's internal benchmarking supports that claim for the specific cases tested. What remains unknown is whether the 500 urgent care cases Doctronic used to establish its 99.2 percent match rate are representative of the chronic medication renewal population that actually uses the program, and the OAIP has not yet published data that answers that question.
The OAIP agreement page notes that a third-party red-teaming process identified potential vulnerabilities under adversarial usage. The OAIP describes these vulnerabilities only in general terms, and the red-teaming report itself has not been made public. That is not necessarily unusual for a sandbox program, but it means the public is being asked to evaluate an AI system's safety record while the most detailed safety assessment remains unpublished.
The OAIP is required to receive a comprehensive final report from Doctronic within 30 days of the agreement's termination or expiration. The agreement runs for 12 months from signing, which puts expiration in October 2026. If Doctronic exercises its single optional extension, the clock runs to October 2027.
The 12-month report is supposed to include an overview of the deployment, any incidents of harm to users, legal action filed against Doctronic as a result of the demonstration, and complaints filed with the state. Whether that report will be made public, and in what form, is governed by Utah's Government Records Access and Management Act (GRAMA). That is the same law under which this contract was eventually disclosed.
Dr. Steinbrook's conclusion, via Public Citizen: "The Food and Drug Administration should not look the other way while an AI system that has not been evaluated or authorized by the FDA identifies itself to the public as an AI doctor."
The OAIP has not responded to a request for comment on whether it has had any contact with the FDA about the program.
The robot says hello. Nothing else happens. That is the story, so far.