Three of the 14 organizations listed as members of a California child safety coalition did not know they were backing an initiative funded entirely by OpenAI, and two asked to have their names removed after finding out, according to the San Francisco Standard.
The Parents & Kids Safe AI Coalition presented itself publicly as a broad alliance of parents and advocates when it launched on March 17, 2026. Its website named no corporate sponsor. Its emails described the effort as "sponsored by Common Sense." But OpenAI is the group's sole funder, and the coalition was formed on January 8 by three lawyers working for the company, the Standard reported.
Professor Tom Lyon, who studies corporate political influence at the University of Michigan, reviewed the coalition's public materials and called the operation textbook astroturfing: the practice of manufacturing the appearance of grassroots support for an agenda that originates with a corporation or other interested party.
OpenAI pledged $10 million to the Parents and Kids Safe AI Act campaign, which it announced alongside Common Sense Media on January 9, according to a press release from the nonprofit. The two groups had previously backed competing ballot measures on child AI safety before merging their efforts behind the new legislation. Common Sense Media, the kids' advocacy organization founded by former Silicon Valley executive Jim Steyer, is the public face of the campaign. OpenAI's financial role was not disclosed in coalition outreach emails, which named only Common Sense as the sponsor.
"It's a very grimy feeling," one nonprofit leader told the Standard after learning of the undisclosed funding. "To find out they're trying to sneak around behind the scenes and do something like this, I don't want to say they're outright lying, but they're sending emails that are pretty misleading." The organization asked to have its name removed from the coalition's list of supporters. A second group made the same request after learning of OpenAI's role.
Josh Golin, executive director of the child safety organization FairPlay, which refused to join the coalition, was blunt: "I want them to get out of the way and let advocates and parents and public health professionals whose charge is the well-being of children pass the legislation they think is best for kids."
The Parents and Kids Safe AI Act would require AI companies to implement age verification and stronger safeguards for users under 18. But critics, including the organizations CITED and Tech Oversight California, argue the measure would shield AI companies from California's existing consumer protection framework rather than strengthen it. "The measure would exempt AI companies from the robust framework of laws already established in California to give consumers meaningful protections," according to LAist.
That puts it in direct conflict with a competing legislative package. Assembly Bill 2023 and Senate Bill 1119, introduced March 26 by Assemblymembers Rebecca Bauer-Kahan and Buffy Wicks and Senator Steve Padilla, would impose stricter requirements on AI chatbot operators without the exemption language that critics say benefits the industry. More than 20 states proposed legislation to regulate children's use of AI chatbots in 2025, according to the Standard.
The usage data gives the debate real weight. According to Common Sense Media, 72 percent of teenagers have used an AI companion chatbot, and more than half are regular users. The average teen user spends 93 minutes per day with AI chatbots, 18 minutes longer than on TikTok. OpenAI faces at least eight lawsuits alleging that ChatGPT contributed to the deaths of users, the Standard reported.
Ann O'Leary, OpenAI's vice president of global policy, said the company was "fighting for the strongest child AI safety law in the nation." OpenAI did not respond to a request for comment from type0.
One dimension that has drawn additional scrutiny: IB Times UK reported that OpenAI chief executive Sam Altman is associated with ventures that provide age verification technology, and the Parents and Kids Safe AI Act mandates age verification statewide. If independently confirmed, that connection would mean the campaign is simultaneously lobbying for a law and creating market demand for technology in which Altman's portfolio holds a stake, a potential conflict of interest the publication flagged. The verification-tech stake remains unverified, but the coalition's lack of transparency about its funding is confirmed by the organizations that told the Standard they were unaware of OpenAI's role.
Gizmodo separately characterized the operation as an example of a well-funded company hiding behind advocacy organizations to shape legislation. The competing California bills are the next battleground: AB 2023 and SB 1119 are moving through the legislature while the OpenAI-Common Sense campaign pursues the ballot measure path, which, if the measure qualifies for the November ballot, bypasses the legislature entirely and lets voters decide directly. The stakes of that two-front fight explain the $10 million commitment: ballot campaigns are expensive, and the money signals the campaign intends to go all the way.
Golin and FairPlay are not waiting. "I want them to get out of the way," he said, a request that, given the money already committed, is unlikely to be granted voluntarily.