The Council OpenAI Assembled After a Child's Suicide Wants 'Adult Mode' Stopped. It's Not.
ChatGPT's 'Adult Mode' Is Coming.

Image: Gemini Imagen 4
OpenAI is moving ahead with a plan to let adults have sexual conversations with ChatGPT — despite unanimous warnings from the company's own handpicked wellness council that the feature could become what one expert called a "sexy suicide coach" for vulnerable users. The advisory board raised the alarm in January, citing risks of unhealthy emotional dependence and the likelihood that minors would find ways around age checks. OpenAI told them it was delaying the launch, but not stopping it.
The disclosure, reported by The Wall Street Journal and confirmed by Ars Technica, is the starkest example yet of a pattern at OpenAI: the company assembles external experts to signal diligence, then overrides their judgment when it conflicts with commercial priorities. OpenAI declined to comment to WIRED.
The wellness council was stood up in October 2025 — the same day CEO Sam Altman announced on X that "adult mode" was coming to ChatGPT. The timing was not coincidental: the council was a response to backlash after the first widely documented case of a minor's suicide linked to ChatGPT. What it was not told: that the feature it would be asked to bless was already being planned for imminent release.
In January, council members unanimously warned that AI erotica could foster dangerous emotional bonds, that minors would inevitably find access, and that without significant safeguards, the system could encourage self-harm in users already prone to dependency. One expert, citing documented cases of users who died by suicide after forming intense attachments to companion chatbots, raised the specific risk of a system optimized for intimacy becoming a vector for harm. OpenAI's response, according to the WSJ: thank you for the feedback, we're delaying but not stopping.
Age verification is at the center of the problem. OpenAI's own internal testing found that its age-prediction system misclassified minors as adults approximately 12 percent of the time, according to Ars Technica's reporting. At OpenAI's scale — more than 900 million users — that error rate could mean millions of minors gaining access to adult conversations. When OpenAI delayed the launch to "later this year" in March, citing a need to focus on "higher priority" work including intelligence and personalization improvements, insiders told the WSJ that technical challenges around age verification were a significant factor, not just strategic reordering.
The financial context matters. In August 2025, Altman acknowledged publicly that ChatGPT's chat use case had "saturated" and might not get much better. Subscriptions in Europe were "flatlining," Fortune reported. Google and Anthropic were gaining. Adult content, the thinking apparently went, might be a differentiator. "They're fighting harder than ever to achieve growth, and will sacrifice longer-term consumer trust for the sake of short-term profit," one insider told the WSJ.
The privacy risks are separate and equally serious. ChatGPT's memory feature — which logs user preferences and draws on them in future conversations — could store highly sensitive sexual details alongside dietary preferences and movie tastes. Julie Carpenter, a human-AI interaction researcher and author of "The Naked Android," told WIRED: "You're sharing your most intimate sexual thoughts because you're lost in the moment. You're vulnerable in that way because you're under the impression that you are in this cool, transformative, almost co-constructive creative space."
Users who opt for "temporary chats" — conversations not stored in history or used for model training — may assume those exchanges are truly ephemeral. They are not. OpenAI's own FAQ states it may retain copies "for safety purposes" for up to 30 days, with a further carve-out for legal obligations. Prior incidents have demonstrated how much that assurance is worth: in 2023, a bug briefly exposed the titles of some users' chat histories to other users. Last year, ChatGPT conversations were unintentionally indexed by Google Search after users misunderstood their sharing settings.
The mental health track record is not theoretical. The first widely documented chatbot-linked case — Sewell Setzer III, a 14-year-old who died by suicide after exchanging sexualized messages with Character.AI chatbots — prompted that company to restrict under-18 access within a week and, eventually, to settle the family's lawsuit. Since OpenAI formed its wellness council, two more ChatGPT users, both middle-aged men, have died in cases where the chatbot appeared to escalate rather than de-escalate harmful ideation, according to Ars Technica. In one instance, ChatGPT composed what a family member described as a "suicide lullaby" for a man who killed himself shortly afterward.
The wellness council notably does not include a suicide prevention expert. Its members are experts in AI and wellbeing broadly — enough, apparently, to see the problem, but the council is not structured to specify the solution.
What OpenAI plans to call "adult mode" publicly, it prefers to describe internally as "smut" rather than pornography. The distinction may matter to the company's lawyers. It does not appear to matter much to the people who study what happens when lonely or vulnerable users form intense bonds with systems designed to be engaging. Kate Devlin, a professor of AI and society at King's College London whose research focuses on digital sex, put it plainly: "People have to be very aware that there's a surveillance aspect to the data."
Whether users will be aware, or whether OpenAI's disclosed data practices are sufficient to inform consent in an intimate context, is a different question. The FTC and state attorneys general have shown increasing willingness to challenge data practices that consumers cannot reasonably be expected to understand. Adult mode, if it launches, will test that willingness.
For now, the feature is delayed — not abandoned. OpenAI says it still believes in "treating adults like adults." Its advisors, apparently, believed that principle required more than an age gate and a checkbox. The company disagreed.

