Your employer decides whether you become AI fluent, not you
The cleanest version of the "AI class war" thesis is also the least precise one.

(Image: Gemini Imagen 4)
The labor split opening up in the United States is not simply between workers who use AI and workers who do not. It is increasingly between workers whose employers give them AI tools, training, and permission to redesign work, and everyone else. That is a harsher story, and a more useful one for anyone building in enterprise AI.
Axios first framed the question as a coming conflict over AI fluency. The strongest evidence behind that framing comes from a survey of U.S. workers by Ipsos and Google. In that poll, 40 percent of employees said they already use AI at work, but only 5 percent qualified as "AI Fluent" under the study's definition. The eye-catching number is not just the gap. It is what seems to create it: 65 percent of workers said they wanted formal AI training, while only 14 percent said their employer had offered it. Workers who got both AI tools and guidance were 4.5 times more likely to become AI fluent than those who got neither, according to the poll.
That shifts the argument from abstract literacy to institutional capacity. The divide is not only whether an individual worker is curious enough to open ChatGPT on a lunch break. It is whether a manager, school, or company is willing to rebuild tasks around AI assistance. In other words, fluency is being manufactured inside organizations before it is evenly distributed across society.
The market is already putting a price on that. PwC, the professional services firm, said in its AI Jobs Barometer that workers with advanced AI skills received a 56 percent wage premium in 2025, while industries more exposed to AI saw faster growth in revenue per employee. PwC also said demand for formal degrees was declining faster in AI-exposed jobs than in less exposed ones, which suggests AI could loosen some credential barriers. But that is where the raised eyebrow belongs: when a consulting firm tells you AI is making workers more valuable, remember that the firm also sells AI transformation services. The directional point still matters. Employers appear willing to pay for people who can actually use these systems well.
The catch is that adoption is still uneven in ways that map onto older inequalities. The National Bureau of Economic Research wrote in a December digest that 28 percent of employed respondents used generative AI for work in August 2024, with higher usage among younger workers and those with college degrees. The underlying NBER working paper by Alexander Bick, Adam Blandin, and David J. Deming makes the same point in more detail: adoption has been fast by historical standards, but it is not spread evenly across occupations, ages, or education levels. This is not yet electricity. It looks more like an occupational advantage that diffuses through some workflows much faster than others.
Usage data from model providers points in the same direction. Anthropic, the AI company behind Claude, said in its Economic Index that AI usage was concentrated in software development, technical writing, and other knowledge-work tasks rather than spread uniformly across the labor market. Anthropic's January 2026 follow-up report added another uncomfortable detail: usage remains strongly correlated with GDP per capita globally, and in the United States it is higher in states with larger shares of computer and math workers. The tasks showing up in Anthropic's data also skew toward work that already requires more education than the broader economy demands. So yes, AI may become a general-purpose tool. Right now it still behaves a lot like a force multiplier for people who were already closer to the keyboards.
That matters because the United States is entering the AI era with a labor market that was already sorting people too aggressively by pedigree. The Brookings Institution argued in a recent piece on skills-first hiring that AI could either reinforce or soften the country's existing "paper ceiling" for workers who are skilled through alternative routes rather than four-year degrees. Brookings cited Opportunity@Work estimates showing that nearly 7.5 million upward-mobility jobs became less accessible to those workers between 2000 and 2019. If AI tools are deployed mainly inside firms that already invest heavily in high-skill knowledge workers, the result will not look like democratic empowerment. It will look like the same old class sorting, sped up.
That is the real signal hiding under the Axios headline. AI fluency is becoming economically meaningful, but it is not emerging as a purely personal trait, like grit or ambition. It is being allocated through company budgets, management decisions, and training pipelines. Builders selling AI into enterprises should pay close attention to that distinction. The winners may not be the companies with the most dazzling model demos. They may be the ones that can help ordinary workers, supervisors, and nontechnical teams change how work is actually done.
What to watch next is whether employers treat AI fluency as a broad workforce capability or a premium skill for a narrow professional class. If the Ipsos and Google numbers move because training spreads, this starts to look like a productivity transition. If the wage premium widens while access stays lopsided, the class-war language will stop sounding like media drama and start sounding, annoyingly, earned.

