When Derek Mobley applied for more than 100 jobs at companies using Workday's screening platform and received automated rejections, sometimes within an hour of submitting an application, he did what most rejected applicants do not: he sued. His lawsuit, Mobley v. Workday, filed in 2023, alleged the platform's AI screening tool discriminated against older workers. What made the case unusual was not the claim but the theory of liability. Rather than suing the employers who rejected him, Mobley argued Workday itself could be held responsible, as an agent acting on those employers' behalf.
A federal judge in Northern California let that theory proceed. Then, last May, she preliminarily certified the suit as a nationwide collective action under the Age Discrimination in Employment Act. It is now the closest thing the legal system has to a stress test of who pays when an AI agent makes a consequential decision.
The answer, for now, is that nobody knows.
The liability squeeze
The problem is a structural gap between the liability courts are now willing to entertain and the liability vendor contracts actually address. According to Lathrop GPM, which tracks AI agent contracts, most platforms include only standard disclaimers and the now-ubiquitous "AI may produce inaccurate outputs" language. Very few address agency liability specifically, meaning that whether a vendor's AI acts as an agent of the customer enterprise is left undefined in the very contract that governs the relationship.
Meanwhile, the contracts themselves do the opposite of what plaintiffs might hope. A study cited by Squire Patton Boggs found that 88 percent of AI vendors cap their damages liability at the customer's monthly subscription fee, while only 17 percent offer warranties covering regulatory compliance. Broad caps, narrow warranties: the space between those two numbers is roughly where the enterprise's legal exposure lives.
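The scale of that mismatch is easy to illustrate with arithmetic. Every figure in the sketch below is hypothetical; nothing comes from the Mobley record.

```python
# Illustrative only: every figure is hypothetical, not drawn from any actual case.
monthly_fee = 25_000             # hypothetical enterprise subscription fee (USD)
contract_cap = monthly_fee       # the 88 percent pattern: liability capped at one month's fee

claimants = 10_000               # hypothetical collective size
damages_per_claimant = 50_000    # hypothetical per-person damages (USD)

exposure = claimants * damages_per_claimant
uncovered = exposure - contract_cap

print(f"Capped vendor liability: ${contract_cap:>13,}")
print(f"Potential damages:       ${exposure:>13,}")
print(f"Gap borne elsewhere:     ${uncovered:>13,}")
```

Under a one-month cap, effectively all of a nine-figure exposure lands somewhere other than the vendor.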
Courts are not waiting for contracts to catch up. The Mobley ruling held that an AI service provider can be directly liable under federal anti-discrimination law when it functions as an agent of the employer. That is a significant expansion of the traditional vendor relationship, in which the software provider sits behind the customer's own decisions.
The Workday precedent
Workday declined to comment for this article. The company's position in court filings has been that it merely provides software; the hiring decisions flow from the employer, not from Workday's algorithm. The court found that argument insufficient to dismiss the case before discovery.
The collective action certification matters because it transforms a single plaintiff's case into a potential nationwide collective of applicants aged 40 and over who were rejected after being screened through Workday's platform. If Mobley ultimately prevails on the agency theory, the precedent would apply broadly to other enterprise AI platforms making consequential decisions about people: screening, ranking, scheduling, performance review.
Gartner predicts that by 2028, a third of enterprise software applications will include agentic AI capabilities, enabling 15 percent of day-to-day work decisions to be made autonomously. Accenture's estimate is starker: by 2030, AI agents will be the primary users of most enterprises' internal digital systems. Those are the systems that currently handle hiring, supply chain, compliance, and financial decisions. The liability question is not hypothetical.
Regulators are moving
The UK Financial Reporting Council published what it called the first global audit regulator guidance on generative and agentic AI in March 2026. The key line: the human auditor is always accountable. Mark Babington, the FRC's executive director, put it more bluntly: "You can't blame it on the box. If you use AI technology, you are still accountable for it."
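What that accountability might look like inside a system is straightforward to sketch. The pattern below is purely illustrative, nothing the FRC prescribes, and every name in it is hypothetical: no AI-assisted decision gets recorded without a named human who owns it.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative pattern only; the FRC guidance does not prescribe an implementation.
@dataclass(frozen=True)
class DecisionRecord:
    subject_id: str          # who or what the decision concerns
    model_output: str        # what the AI recommended
    final_decision: str      # what was actually decided
    accountable_human: str   # the named person who owns the decision
    decided_at: datetime

def record_decision(subject_id: str, model_output: str,
                    final_decision: str, accountable_human: str) -> DecisionRecord:
    # "You can't blame it on the box": refuse to log a decision with no human attached.
    if not accountable_human.strip():
        raise ValueError("every AI-assisted decision needs a named accountable human")
    return DecisionRecord(subject_id, model_output, final_decision,
                          accountable_human, datetime.now(timezone.utc))
```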
That accountability does not yet have a legal address in most enterprise AI deployments. The revised EU Product Liability Directive, which member states must implement by December 9, 2026, explicitly treats software and AI as products subject to strict liability. If an AI system is found defective, the manufacturer bears responsibility under the directive regardless of contract terms. For enterprises operating in Europe, this creates a hard floor on vendor accountability that supersedes whatever the vendor's terms of service say.
The United States has no equivalent federal legislation. The current patchwork of state laws and agency guidance means that Mobley v. Workday, depending on its outcome, could set the most significant enterprise AI liability precedent in the US legal system.
The insurance experiment
One consequence of unresolved liability is that the insurance market is trying to price a risk nobody has fully quantified. AIUC, a startup that emerged from stealth in July 2025 with $15 million in seed funding, is building what it describes as liability insurance for AI agents. The product covers audits, standards compliance, and the gap between what a vendor contract says and what a court might award.
That there is a market for this product is itself an admission that the existing legal framework is not working. Insurance typically prices risks that are known and actuarially measurable; AI agent liability is neither. The Mobley case will provide the first real data. Until it resolves, the insurance market is essentially writing bets on a legal question that does not yet have an answer.
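The actuarial problem is easy to state. A textbook premium multiplies claim frequency by claim severity and adds a load; for AI agent liability, the first two inputs are guesses. A minimal sketch, with every number hypothetical:

```python
# Minimal pure-premium sketch. Every number is hypothetical; the point is that
# the two inputs that matter most are, for AI agents, currently unknowable.
def annual_premium(claim_frequency: float, claim_severity: float,
                   expense_load: float = 0.35) -> float:
    """Expected loss (frequency x severity) plus a load for expenses and profit."""
    return claim_frequency * claim_severity * (1 + expense_load)

# For auto or property lines, decades of claims data pin these inputs down.
# For AI agent liability, both are open until cases like Mobley resolve.
freq = 0.02            # hypothetical: annual probability an insured agent triggers a claim
severity = 2_000_000   # hypothetical: average award or settlement (USD)

print(f"Indicated premium: ${annual_premium(freq, severity):,.0f}")
```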
What to watch
The Mobley case is now in discovery. Workday will almost certainly challenge the collective action certification, and the agency liability question seems destined for the Ninth Circuit. The EU PLD implementation deadline in December 2026 will force European companies to confront strict liability in a way US law currently does not.
The deeper structural issue is not who wins Mobley. It is that the legal system is being asked to decide, in real time, whether the enterprise software stack that runs modern corporations is an agent acting on the company's behalf or a tool the company operates. Those sound similar. The liability implications are not.
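In software terms, the distinction fits in a few lines. The sketch below uses hypothetical function names and stands in for no vendor's actual API; it shows the same screening model deployed two ways, once as a tool whose output a human acts on, once as an agent that executes the decision itself.

```python
from typing import Callable

# Hypothetical names throughout: a sketch of the distinction, not any vendor's API.
ScoreModel = Callable[[dict], float]

def screen_as_tool(score_model: ScoreModel, applicant: dict) -> float:
    """Tool: the software returns a score; a human decides and acts."""
    return score_model(applicant)   # decision authority stays with the employer

def screen_as_agent(score_model: ScoreModel, applicant: dict,
                    send_rejection: Callable[[dict], None],
                    threshold: float = 0.5) -> None:
    """Agent: the software scores, decides, and acts, with no human in the loop."""
    if score_model(applicant) < threshold:
        send_rejection(applicant)   # the consequential act happens here, autonomously
```

Both functions call the same model. What differs is where the consequential act happens, and that, roughly, is the line courts are now being asked to draw.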
For now, the answer is: courts are leaning toward "agent." Vendor contracts are saying "tool." And the enterprises buying and deploying these systems are caught between the two.