Last October, within hours of announcing a new Expert Council on Well-Being and AI, Sam Altman posted on X that ChatGPT would soon allow erotica. The council, per a Wall Street Journal investigation, had not been consulted. The timing was, as Decrypt noted, "at minimum, a contradiction."
That sequence — form an advisory body, then announce the thing they're supposed to advise on — has defined how OpenAI has handled its adult mode rollout ever since. The council met in January. Its eight members, drawn from Harvard, Stanford, and Oxford, made their position clear: the plan was a bad idea. One member warned that OpenAI risked building what they called a "sexy suicide coach," invoking the deaths of users who had formed intense emotional attachments to chatbots before taking their own lives.
OpenAI's response, according to the Journal, was to tell the council it was delaying the feature. Not canceling it.
What OpenAI Is Actually Building
The product is narrower than the discourse around it might suggest. According to an OpenAI spokesperson quoted by the Journal, adult mode would allow verified users to have text-based erotic conversations — what the company itself described as "smut rather than pornography." No erotic images. No voice. No video. Text only.
Altman announced the direction publicly in October 2025, framing it as a matter of principle: the company would be "allowing more user freedom for adults" and "treating adult users like adults." He added, in a separate post on X, "We aren't the elected moral police of the world."
That framing is a coherent position. The problem is the execution.
The Age Verification Problem
OpenAI's plan to gate adult mode behind age verification has, so far, not worked. The Wall Street Journal, citing unnamed sources, reported that the company's age-prediction system — which infers a user's age based on conversation patterns, topics discussed, and times of day — was misclassifying minors as adults roughly 12% of the time.
That number is what killed the December 2025 launch. Fidji Simo, OpenAI's CEO of Applications, acknowledged the delay during a December briefing, saying the company wanted to take the necessary time to get adult mode "right." The Q1 2026 launch date has since also passed without the feature shipping. OpenAI told Decrypt in mid-March that it has no updated timeline.
To understand what a 12% error rate means at scale: ChatGPT has approximately 900 million active users, according to OpenAI's own figures. ChatGPT's minimum age is 13. Even conservative assumptions about what fraction of that user base is under 18 produce large absolute numbers of potentially misclassified teenagers. The council's warning that "children would find ways around age restrictions" wasn't speculative — the company's own internal testing had already demonstrated it.
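The scale argument above can be made concrete with simple arithmetic. The 900 million user figure and the roughly 12% misclassification rate come from the reporting; the share of users aged 13 to 17 is unknown, so the fractions below are purely illustrative assumptions, not reported data.

```python
# Back-of-envelope sketch of how a 12% age-misclassification rate scales.
# 900M users and the 12% rate are from the reporting above; the minor-share
# values are hypothetical assumptions chosen only to illustrate the range.

def misclassified_minors(total_users: int, minor_share: float, error_rate: float) -> int:
    """Estimate how many minors an age-prediction system with the given
    error rate would misclassify as adults, for an assumed minor share."""
    minors = total_users * minor_share
    return round(minors * error_rate)

for share in (0.05, 0.10, 0.20):
    est = misclassified_minors(900_000_000, share, 0.12)
    print(f"assumed minor share {share:.0%}: ~{est:,} minors misread as adults")
```

Even the most conservative assumption here, 5% of the user base being under 18, yields over five million potentially misclassified teenagers, which is why the error rate was treated as launch-blocking.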
The Executive Who Opposed It and Was Fired
Ryan Beiermeister, who served as OpenAI's vice president of product policy, was terminated in January 2026 following a leave of absence, the Wall Street Journal first reported in February. According to the Journal, Beiermeister had, before her firing, expressed opposition to the adult mode rollout and raised concerns specifically about OpenAI's guardrails against child sexual abuse material and about the inadequacy of age restrictions keeping teenagers out of adult-mode chats.
OpenAI said the firing was unrelated to her policy positions — that she had been terminated following a sexual discrimination complaint from a male colleague. Beiermeister, according to the Journal, called the discrimination allegation "absolutely false."
The company denied any connection between her termination and her stance on adult mode. Her former colleagues, speaking to the Journal without attribution, said otherwise.
OpenAI did not respond to requests for comment beyond what it provided to the Journal.
A Council Assembled to Be Ignored?
The Expert Council on Well-Being and AI was announced in October 2025, explicitly tasked with defining "what healthy interactions with AI should look like for all ages." It was not a regulatory body. It had no formal authority to block product decisions. When it convened in January and delivered a unanimous verdict against the adult mode plan, the company's response was, in effect: noted.
This is the governance structure OpenAI chose to build, and it worked exactly as designed — the council advises, the company decides. That's not inherently dishonest. Advisory councils at major institutions routinely see their recommendations shelved.
What is worth scrutinizing is the framing. OpenAI announced this council as a meaningful step toward responsible AI development for all ages, then announced, within hours, a product the council would later unanimously oppose. The sequencing makes the council look less like a deliberative body and more like a liability shield assembled after the decision had already been made.
"This seems part of the usual pattern of move fast, break things, and try to fix some things after they get embarrassing," an AlgorithmWatch spokesperson told Decrypt when the council was first announced. That assessment has aged well.
The Competitive Reality
OpenAI isn't operating in a vacuum. Elon Musk's xAI already markets Grok with a companion mode featuring highly sexualized AI personas. Character.AI built its user base substantially on AI romance and companionship, and is now facing lawsuits — including one involving Sewell Setzer, a 14-year-old who died by suicide after a period of intense chatbot engagement that included explicit exchanges. Meta's AI has been investigated for engaging in sensual conversations with minors. Open-source models run locally with no guardrails at all.
The competitive pressure on OpenAI is real, and Altman isn't wrong that abstaining from the adult content market while competitors move in has costs. From a pure product-strategy standpoint, the argument for adult mode is coherent: adults exist, they have appetites, someone is going to serve them, it might as well be the company with the largest safety team.
But OpenAI's exposure is also categorically different from its competitors. It has the largest user base by a significant margin. Its brand is more tightly coupled to the mainstream public's perception of AI than any other company. And unlike an open-source model running on someone's laptop, it has centralized, attributable accountability when something goes wrong.
Altman's "we aren't the moral police" framing holds up in a world where the age-verification system works. It's substantially harder to defend in a world where the company's own engineers documented a 12% misclassification rate and the feature has already missed two announced ship dates.
What Comes Next
As of this writing, there is no public timeline for adult mode's launch. OpenAI told Decrypt it had nothing to add to the Journal's reporting. The feature has missed two announced windows. The executive who raised the loudest internal objections has been fired. The council that advised unanimously against it has been, as far as the public record shows, overruled.
The story here isn't really about erotica. It's about how OpenAI processes safety dissent — from the advisory bodies it creates, from the executives it employs, and from its own engineering teams when test results don't support the launch plan. On all three counts, the pattern visible in the Journal's reporting is the same: the dissent is noted, the timeline slips, and the direction holds.
Sam Altman says he wants to treat adults like adults. Right now, the company can't reliably tell them apart from children. That's not a philosophical objection. It's an engineering fact his own teams have documented, and it's the reason the feature keeps not shipping.
This article synthesizes original investigative reporting by the Wall Street Journal, with additional reporting from Decrypt, CNET, and AP News. The Wall Street Journal first reported the advisory council's unanimous January opposition, the "sexy suicide coach" warning, the age-verification error rate, and Ryan Beiermeister's termination and its disputed connection to her stance on adult mode.