DeepMind just hired a Cambridge philosopher to investigate whether AI systems can ever be conscious. The job posting went up last week. The new hire starts in May. What makes the hire unusual is not the title, since philosophers have worked at AI labs before, but what a DeepMind researcher published a few weeks earlier: a paper, posted to PhilArchive in March, that drew 800,000 impressions and 12,000 likes on X.
Alexander Lerchner, a Senior Staff Scientist at Google DeepMind, released that paper on March 10, 2026, under the title The Abstraction Fallacy: Why AI Can Simulate But Not Instantiate Consciousness. Its core argument: symbolic computation cannot produce consciousness, because computation itself requires a conscious interpreter to assign meaning to physical states. The transistors switching inside a GPU are not computing anything on their own. They become computation only when a conscious observer reads them as symbols. If that is true, then computation cannot produce consciousness; it already presupposes it.
Lerchner calls this the mapmaker problem. The standard view in AI research holds that physics produces computation, and sufficiently complex computation produces consciousness. Lerchner inverts the chain: physics produces consciousness, and consciousness produces computation. The abstract causal structures that functionalists treat as substrate-independent are in fact mapmaker-dependent: properties of our descriptions of physical processes, not of the processes themselves.
The paper uses a gravity formula as its key analogy. A correct mathematical description of gravitational attraction does not itself exert any gravitational pull. In the same way, a correct algorithmic description of an emotional state does not itself instantiate the feeling. The description is the map. The territory is something else.
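For concreteness, the kind of equation the analogy points to is something like Newton's law of universal gravitation; the specific formula below is an illustration chosen here, not one quoted from Lerchner's paper.

\[
  % Illustration only: Newton's law of universal gravitation, standing in
  % for the paper's unspecified "gravity formula".
  F = G \, \frac{m_1 m_2}{r^2}
\]

Writing these symbols down describes the attraction between two masses exactly, yet the inscription pulls nothing toward anything.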
Lerchner distinguishes simulation from instantiation. Current AI systems simulate consciousness through behavioral mimicry driven by what he calls vehicle causality: each physical state of the hardware causes the next, and the system is trained so that its outputs match human patterns. But the symbolic level, the meaning, is imposed from outside by designers and users. Instantiation would require content causality: physical processes genuinely generating subjective states because the physics itself has the right properties, not because an observer assigned them significance. On this account, biological brains instantiate consciousness. Digital computers do not.
The paper explicitly states that if an artificial system were ever conscious, it would be because of its specific physical constitution, never its syntactic architecture. This is not a scale argument. Lerchner does not say consciousness is absent because current AI is too small or too poorly trained. He says the question is structurally closed for any symbolic system, at any scale.
DeepMind announced the hire of Henry Shevlin, previously Associate Director of the Leverhulme Centre for the Future of Intelligence at Cambridge, as a full-time in-house Philosopher. Shevlin will focus on machine consciousness, human-AI relationships, and AGI readiness, according to TheNews.com.pk. He announced the role on X: "I have been recruited by Google DeepMind for a new Philosopher position...starting in May."
Two bets, one lab. Lerchner's paper argues that the phenomenon DeepMind just hired someone to investigate full-time is structurally impossible for any symbolic system. Shevlin will work on it anyway.
The tension is not simply that Lerchner might be wrong and Shevlin right. It is that DeepMind appears to be running both positions simultaneously, without public acknowledgment of the contradiction. If Lerchner is correct, every AI safety team at every lab working on machine consciousness detection is addressing a scenario that, according to a structural argument published by his own employer, cannot occur. If Lerchner is wrong, DeepMind has quietly funded serious philosophical work on a question that much of the industry treats as settled by meme.
Neither position is obviously wrong. One detailed breakdown notes that Lerchner is arguing a position close to John Searle's biological naturalism: consciousness requires specific causal powers that biological systems have and digital computers lack. But the institutional collision is the more immediate story: a company paying two smart people to do jobs whose premises appear to rule each other out, and saying nothing publicly about the gap.
Shevlin starts in May. The paper has been online since March. The silence between them is the story.