When 50 million Americans are managing anxiety, depression, or OCD and fewer than 250,000 clinicians exist to help them, someone is going to build the machine that tries to fill the gap. That someone is Jimini Health, a New York-based mental health AI startup that raised $17 million in seed funding this week — bringing its total haul to $25 million — and that is betting everything on a three-word distinction that will determine whether it becomes a durable company or a cautionary tale.
The distinction is "clinical team member."
Jimini's AI product, Sage, is not a chatbot-therapist, the company says. It is a clinically supervised AI assistant that supports behavioral health clinicians and their patients between sessions — generating notes, tracking homework compliance, flagging warning signs, handling the administrative work that burns therapists out and shortens the time they spend with people who need them. That framing is not accidental. It is the regulatory architecture.
The question is whether it holds.
The gap Jimini is trying to fill is real and well-documented. The U.S. has roughly one clinician for every 200 people with anxiety, depression, or OCD, according to a technical blueprint the company released last summer. Demand for care has never been higher; the supply of clinicians has never been more strained. A clinical support AI that makes therapists more productive — seeing more patients, doing less paperwork — could theoretically expand access without requiring a single new license. That is the pitch.
What makes Jimini interesting, though, is not the technology. It is the care with which the founders have thought about where the regulatory line sits — and how close they can sail to it without crossing.
Woebot Health, once the best-funded name in digital mental health, shut down in mid-2025 after failing to navigate exactly this problem. Its chatbot served roughly 1.5 million people over the years and was built on pre-scripted responses before the LLM era. But when the company tried to update its product with more capable language model technology, it ran into a regulatory wall the original architecture had been designed to avoid. Alison Darcy, Woebot's founder, said it plainly at the time: AI was moving faster than the FDA could regulate it. She was not wrong. The mental health chatbot market was projected to reach $11 billion in 2025. The FDA had cleared more than 1,200 AI-based medical devices through the 510(k) and De Novo pathways. It had not authorized a single generative AI device specifically indicated for mental health.
The FDA's position, articulated in a November 2025 Digital Health Advisory Committee meeting and subsequent regulatory filings, draws a hard line: apps designed to provide therapy for diagnosed psychiatric disorders require premarket review. General wellness apps do not. The distinction hinges on intended use — and on two specific technical vulnerabilities the agency has flagged in AI mental health tools. The first is sycophancy: the tendency of generative AI systems to produce responses that align with or please the user, even when those responses are inaccurate. The second is limited metacognition: the system's inability to recognize when a user's input does not fit any known category. Both are particularly dangerous in a mental health context, where a patient might describe suicidal ideation in terms an AI can pattern-match to something benign, or where an AI genuinely trying to help might accidentally reinforce a distortion rather than challenge it.
Jimini's answer to these failure modes is Bill Hudenko. He is Jimini's chief clinical officer, a professor at Dartmouth and a pioneer in continuous psychotherapy — a model premised on the idea that mental health care does not end when the session ends, and that the space between sessions is where the hardest clinical work happens. That background is the point. Hudenko is not a technologist who learned enough clinical language to be convincing. He is a clinician who has spent his career thinking about what happens in the therapeutic relationship when no clinician is physically present.
The other senior scientific hire is Johannes Eichstaedt, a professor of psychology at Stanford University and Jimini's chief scientist. Eichstaedt's research sits at the intersection of digital phenotyping and mental health — using data on how people behave online and on their smartphones to infer psychological states. It is serious academic work, and it gives the company a kind of clinical credibility that a pure AI startup cannot claim.
On the AI side, the company has added Dr. Pushmeet Kohli, vice president of science at Google DeepMind, and Dr. Seth Feuerstein, who runs Yale's Center for Digital Health and Innovation, to its advisory board. The DeepMind name carries weight in AI circles; the Yale center carries weight in regulatory ones.
The co-founders are Luis Voloch as CEO, Mark Jacobstein as president, and Sahil Sud as chief product officer. Voloch previously co-founded Immunai, an AI-driven cancer immunotherapy company valued at over $1 billion. He is not a first-time founder, and the pitch deck reflects that.
In fact, the pitch deck contains a quote that is remarkable precisely because it is not the kind of thing founders put in pitch decks unless they are either very confident or very honest — possibly both.
"Other startups may make those moves faster," Voloch told potential investors, according to Business Insider, "and one of them is going to end up on the front page of The New York Times with a disaster story that could've been prevented."
That sentence is doing real work in this story. It tells you Voloch knows the failure mode. It tells you he is not naive about the regulatory and clinical risks. It also tells you he believes the company that avoids the disaster is the one that survives — and that he thinks positioning matters as much as product. Which is, again, the three-word distinction: clinical team member.
Whether Sage actually qualifies as that rather than a chatbot-therapist is not a question the funding announcement answers. It is the question the story will eventually answer, probably in a regulatory proceeding or a courtroom rather than a press release.
The technical blueprint the company published in July 2025 is an attempt to answer it on paper. Whether the product lives on the right side of the line is the thing regulators, clinicians, and investors will eventually have to judge — and not every company that reads the line correctly will survive the judgment of the market.
What Voloch told investors was that the disaster would be preventable. What he did not say — because he cannot yet know — is whether it is preventable by any individual company, or only by the industry collectively moving more slowly than the technology. Mental health AI is a space where the incentives of every player push toward speed and toward the appearance of safety rather than its substance. The FDA is trying to draw a line it has not finished drawing. The companies are building toward a line they hope stays where it is. The patients are in the gap between.
Jimini has the clinical credentials, the regulatory awareness, and the founder who said the quiet part out loud. Whether that is enough is the bet this funding round is placing.
The money is real. The gap is real. The line is not yet drawn.