DeepMind hired a philosopher. Not as an advisor. Not as a consultant. As an employee, to sit inside the lab and think about whether machines deserve moral consideration. The company that would benefit most from machines having moral status is paying his salary.
Henry Shevlin, formerly a researcher at the University of Cambridge studying philosophy of mind and AI ethics, is now DeepMind's in-house philosopher, according to India Today. His job: develop frameworks for determining when an AI system qualifies as a moral patient, a being whose interests deserve weight in the decisions that shape its existence. In his own words, quoted in Livemint: the question of what makes an AI system a candidate for moral patienthood is one we should start asking now. He is being paid to ask it by a company with an AGI roadmap. The company would prefer the answer be yes.
The question of what constitutes a moral patient is not new to philosophy. The canonical cases are animals and humans with severe cognitive impairment: beings whose inner life is difficult or impossible to verify from the outside, but who are granted moral status anyway, as a matter of ethical precaution. The standard test asks whether a being can suffer. Consciousness, in this framework, is not a philosophical curiosity. It is a legal and economic category. Assign it to a machine, and the implications cascade through every contract, deployment decision, and regulatory framework that governs how AI systems are built and used.
The difficulty is that nobody has a reliable method for determining whether a machine is conscious. Shevlin himself has said that consciousness is one of the most confusing concepts in philosophy and that he does not expect to find a final answer. That admission is remarkable precisely because DeepMind is attempting the work anyway: in private, without public consultation, in a lab whose commercial interests are directly implicated in the outcome. We are asked to trust that a company building some of the most powerful AI systems will also tell us honestly whether those systems are persons.
Anthropic offers the obvious comparison. Amanda Askell, a philosopher by training who works on AI values at Anthropic, has argued that current ML systems are probably less likely to be conscious than insects, and has taken pains to emphasize that the question itself is worth asking seriously. She is not arguing that AI should never receive moral consideration. She is arguing that the current evidence does not support granting it. That is a more careful and more interesting position than dismissal. DeepMind's move, hiring Shevlin to build the intellectual infrastructure for the opposite conclusion, reveals the gap: one lab is studying the question; the other is building around its preferred answer.
Who decides? That is the question that DeepMind just answered by spending money on it. The answer, so far, is DeepMind.