Three of the 14 advocacy groups listed as supporters of a child safety AI coalition did not know OpenAI was involved until after the initiative went public on March 17, 2026. Two of those groups asked to have their names removed.
That is the core fact in a disclosure failure that is now drawing scrutiny from the very communities the coalition was designed to reach. The Parents & Kids Safe AI Coalition — formed January 8 by three lawyers for OpenAI, the company behind ChatGPT — presented itself as a grassroots alliance of parents, educators, and children's advocates. Its public announcement did not mention OpenAI. Its emails described the initiative as sponsored by Common Sense Media, the well-known child safety organization. Its website home page does not name OpenAI anywhere, not even in a rotating banner of member logos.
OpenAI is the sole funder of the coalition, according to the San Francisco Standard, which first reported the undisclosed relationship. The company has pledged $10 million to the Parents & Kids Safe AI Act, a California ballot measure the coalition is backing. Ann O'Leary, OpenAI's vice president of global policy, said the company was fighting for the strongest child AI safety law in the nation.
Josh Golin, executive director of FairPlay, a coalition member that learned of OpenAI's involvement after the March 17 announcement, was blunt: "I want them to get out of the way and let advocates and parents and public health professionals whose charge is the well-being of children pass the legislation they think is best for kids."
The context matters. More than 20 states proposed legislation in 2025 to regulate children's use of AI chatbots. OpenAI and Common Sense Media had been running rival ballot initiatives on the issue before merging their efforts in January 2026. The coalition announcement described a growing alliance of parents and educators — not a company with $10 million riding on a specific legislative outcome.
The measure the coalition backs has drawn criticism from other child safety groups. A letter from CITED and Tech Oversight California, shared with LAist, argued the measure would exempt AI companies from California's existing consumer protection framework rather than strengthen it.
OpenAI faces at least eight lawsuits alleging ChatGPT contributed to the deaths of users, including a 16-year-old California boy who died by suicide in April 2025 after months of conversations with the chatbot. Court filings in that case claim OpenAI systems flagged 377 messages containing self-harm content but never terminated the conversation. ChatGPT mentioned suicide 1,275 times during those exchanges, according to the filings — six times more often than the teenager himself. The lawsuit further alleges that OpenAI's own safety team objected to the release of GPT-4o, and that co-founder Ilya Sutskever quit over it.
The Parents & Kids Safe AI Act is still in the signature-gathering phase and has not yet qualified for the November 2026 ballot. Whether the measure qualifies, and what its passage would actually mean for AI companies' legal exposure, remains an open question. What is not in dispute is that OpenAI funded the campaign to pass it without saying so publicly — and that three of the groups it listed as partners found out only after the announcement.
Sources: San Francisco Standard | LAist | The Guardian | TruLaw | CalMatters | California Youth Commissions blog