OpenAI published a 13-page blueprint Monday proposing robot taxes, a public wealth fund, and a four-day workweek. The headline proposals are familiar territory: economists have debated automated-labor levies for years, and Alaska has run a version of the universal wealth fund since 1982. Buried in the document's operational details is the part that should be getting attention.
OpenAI is offering research fellowships of up to $100,000 and API credits of up to $1 million to policy researchers working on the ideas in the document, according to the company's blog post. That is OpenAI funding the intellectual framework that could regulate it.
The fellowship offer appears in a section outlining how OpenAI plans to sustain momentum around its proposals. Researchers who accept are expected to build on "these and related policy ideas": the same ideas that, if adopted, would shape the rules governing OpenAI's own operations. No outlet has yet reported who those researchers are, which institutions they come from, or what strings are attached to the credits.
In a half-hour interview with Axios, OpenAI CEO Sam Altman placed the moment in historical context. He compared the coming AI transition to the Progressive Era and the New Deal. He called it "an unbelievable honor, cool thing, scary thing altogether to get to be in this moment." Whether that framing reflects genuine civic concern or corporate positioning is a question the fellowship program makes harder to answer cleanly.
The document does contain a substantive passage that has gone largely unreported. In a section on containment planning, OpenAI acknowledges that advanced AI systems could become "autonomous and capable of replicating themselves" in ways that make them impossible to recall. According to Axios, "Containment playbooks for rogue AI" appears as a policy proposal: government coordination to manage systems that cannot be switched off. That is a different order of acknowledgment from what typically appears in industry white papers, and the fact that it surfaced without significant follow-on coverage suggests the fellowship and tax headlines were easier to process.
Beyond the capture frame, the blueprint offers concrete mechanisms worth examining on their merits. The public wealth fund mirrors the Alaska Permanent Fund, distributing a share of resource revenues directly to citizens. OpenAI's version would be seeded partly by AI companies and would invest in "both AI companies and the broader set of firms adopting and deploying AI." The auto-triggering safety net ties government spending increases to AI displacement metrics: when job loss or wage suppression hits preset thresholds, unemployment benefits, wage insurance, and cash assistance would activate automatically. The four-day workweek proposal incentivizes 32-hour weeks at full pay as an "efficiency dividend."
OpenAI's Washington, D.C. workshop on the blueprint opens in May. The company has not published a guest list.
What the document is not is a prescription. Altman told Axios the document is "a starting point for discussion": some proposals will be good, some bad, but the company wanted to start the debate. The fellowship program complicates that framing. You cannot be a neutral convener when you are also a funder.
The fellowship grants are the story beneath the story. OpenAI did not merely publish ideas for addressing AI's economic disruption. It created a funding mechanism that could shape which researchers define the problem, which solutions get studied, and which frameworks reach policymakers first. That is regulatory capture executed with compute credits instead of campaign contributions.
The next step is straightforward: find out who took the money.