OpenAI published a 13-page policy document on April 6th titled "Industrial Policy for the Intelligence Age: Ideas to Keep People First," as reported by TechCrunch. The ideas inside are specific enough to be taken seriously and politically convenient enough to invite suspicion.
The document proposes a robot tax: automated systems taxed at rates comparable to the human workers they replace. It proposes a public wealth fund modeled on the Alaska Permanent Fund, where every citizen gets a financial stake in AI-driven growth with returns distributed directly. It proposes a 32-hour workweek framed as an efficiency dividend from AI productivity gains. And it proposes something that nobody else in the AI industry has put on paper: automatic safety net triggers. When AI-driven displacement metrics hit preset thresholds, unemployment payments and wage insurance increase automatically, then phase out when conditions stabilize.
That last item is the most significant. It is OpenAI quietly admitting that AI-driven job displacement is a near-term certainty, not a theoretical risk, and that the company is serious enough about the problem to require policy-level pre-commitment before the disruption arrives.
Sam Altman told Axios in an exclusive interview that the scale of change coming from AI is comparable to the Progressive Era and the New Deal. He separately said, also per Axios reporting, that "a major cyberattack enabled by near-future AI is totally possible within the next year," and that AI models being used to create novel pathogens is no longer theoretical. The $852 billion company, which closed a $110 billion private funding round and restructured as a for-profit in 2025 with a public-benefit mandate and a nonprofit equity stake, is not framing this as a distant hypothetical.
The robot tax idea is not new. Bill Gates proposed it in 2017. What has changed is that a company with OpenAI's scale and access to Washington is putting specific legislative meat on the bones. The tax would shift the burden of automation from payroll to capital. For industries where the unit economics of automation are already favorable, a robot tax could slow adoption. For industries where the transition is already underway, it could create a transition fund. The document does not specify rates or thresholds, but the direction is clear.
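The payroll-to-capital shift can be made concrete with one line of arithmetic. The figures below are hypothetical, since the document specifies no rates or thresholds; this is only a sketch of the principle that an automated system would owe roughly what the displaced worker's payroll taxes would have been.

```python
# Illustrative robot-tax arithmetic under the document's framing.
# Both figures are invented for illustration; the document names neither.

displaced_wage = 52_000    # annual wage of the replaced role (assumed)
payroll_tax_rate = 0.153   # combined FICA-style employer+employee rate

# Tax the automated system at a rate comparable to the payroll burden
# of the human worker it replaces.
robot_tax = displaced_wage * payroll_tax_rate
print(f"${robot_tax:,.2f} per automated position per year")
```

Under these assumed numbers, each automated position would owe a few thousand dollars a year, which is the sense in which the tax could either slow adoption or fund the transition.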
The public wealth fund is modeled on the Alaska Permanent Fund, which distributes a portion of oil revenues to every state resident annually. OpenAI's version would invest in long-term assets tied to the AI economy, creating what the document calls a national dividend from automation. If the fund captured even a fraction of the value that flows through AI infrastructure over the next decade, the annual dividend per citizen could be substantial. It could also become the most politically durable argument for AI adoption that the industry has ever manufactured: vote against AI and you vote against your dividend.
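The dividend arithmetic is easy to run for yourself. Every input below is a hypothetical assumption, not a figure from the document; the payout rate is loosely modeled on the Alaska fund's roughly five percent of market value.

```python
# Back-of-envelope dividend arithmetic, purely illustrative.
# All three inputs are assumptions; the document provides no numbers.

fund_assets = 1_000_000_000_000   # $1T captured in AI-linked assets (assumed)
annual_payout_rate = 0.05         # ~5% of fund value, Alaska-fund-like
eligible_citizens = 330_000_000   # approximate U.S. population

annual_dividend = fund_assets * annual_payout_rate / eligible_citizens
print(f"${annual_dividend:,.0f} per citizen per year")
```

The point of the exercise is the sensitivity: a dividend that feels "substantial" requires either trillions under management or a small eligible population, which is why the fund's capture rate would be the politically contested number.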
The auto-triggering safety nets are the most technically specific proposal in the document. They require defining and measuring AI-driven displacement in real time, a measurement capability that does not currently exist: the Bureau of Labor Statistics has no category for "jobs displaced by AI," and the Federal Reserve does not track automation-specific unemployment. Building the measurement infrastructure would take years, and the triggers themselves would need legislative authority that no current statute provides. The document acknowledges this implicitly by framing everything as "ideas" rather than a policy platform with a path to enactment.
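The trigger logic itself is simple to sketch, even though the statistic it would consume does not yet exist. A minimal Python sketch, with thresholds invented for illustration, shows the shape of the mechanism: benefits activate when a displacement metric crosses a threshold and phase out only once conditions stabilize, which means the trigger needs hysteresis so it does not flicker on and off.

```python
# Hypothetical sketch of an auto-triggering safety net.
# All thresholds and scaling factors are invented; the document specifies none.

def benefit_multiplier(displacement_rate: float,
                       currently_active: bool,
                       activate_at: float = 0.05,
                       deactivate_at: float = 0.03) -> tuple[float, bool]:
    """Return (payment multiplier, trigger state) for one review period.

    displacement_rate is the share of the workforce measured as
    AI-displaced, a statistic no agency currently produces.
    Hysteresis: activation and deactivation use different thresholds,
    so benefits phase out only when conditions have stabilized.
    """
    if not currently_active and displacement_rate >= activate_at:
        currently_active = True
    elif currently_active and displacement_rate < deactivate_at:
        currently_active = False

    if currently_active:
        # Scale extra support with distance above the phase-out floor.
        return 1.0 + (displacement_rate - deactivate_at) * 10, True
    return 1.0, False
```

Even this toy version makes the article's point: every named constant is a political decision, and the function's input is a number nobody measures.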
The document also says things that are uncomfortable for OpenAI's position in the current regulatory environment. The for-profit restructuring in 2025, with a public-benefit mandate and a nonprofit retaining an equity stake, is a governance structure that has attracted scrutiny from attorneys general and securities regulators. The wealth fund proposal, if it gained traction, would create a recurring political dividend that could be weaponized against OpenAI's competitors in the semiconductor and cloud infrastructure space. A tax on AI systems that uses the same measurement infrastructure as the displacement triggers would disadvantage smaller players who lack the compute scale to spread the cost.
The 32-hour workweek proposal is the lightest of the four. Incentivizing employers to pilot 32-hour weeks without reducing pay, tied to AI productivity gains, is a political offer that costs OpenAI nothing and gives organized labor something to take to the next contract negotiation. It is also, notably, the only one of the four proposals that does not require defining or measuring what AI is actually doing to employment.
OpenAI framed the document as an intervention in the national conversation rather than a legislative platform. That framing is deliberate. A company that raised $110 billion cannot easily advocate for specific taxes without triggering a political reaction from the companies it is asking to pay them. The document's language keeps every proposal at the level of principle rather than statute, which allows OpenAI to claim credit for addressing displacement while avoiding accountability for the specific mechanisms that would make any of it work.
What is genuinely new is the auto-trigger framing. Nobody else in the industry has put a name on the specific problem of AI-driven displacement and proposed a policy mechanism that activates before the harm occurs rather than after. Whether that mechanism is implementable, measurable, or politically durable is a separate question. The fact that OpenAI published it at all, from an $852 billion company that just closed the largest private funding round in the sector's history, is the actual story.