There's a problem emerging in persistent AI agents that hasn't gotten a name until now: "zombie memories."
That's the term researchers at arXiv use in a paper published March 18 to describe a concrete failure mode in large language model agents. An agent stores information from past interactions in its context window. Over time, some of that information becomes stale or outright contradictory — an old preference, a superseded fact, a mistaken summary of a prior conversation. The agent doesn't know it's wrong. It acts on the bad memory. The context window is contaminated, but there's no mechanism to catch it.
The paper introduces MemArchitect, a governance layer for agent memory. Rather than treating memory as passive storage — the standard RAG framework approach — MemArchitect decouples memory lifecycle management from model weights and enforces explicit, rule-based policies. That includes memory decay (older information degrades automatically unless reinforced), conflict resolution (contradictory memories are flagged or resolved), and privacy controls (what the agent can and cannot remember about a given interaction).
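Those three policies can be sketched in a few lines. What follows is a minimal toy implementation of the ideas as the paper describes them — time-based decay, last-write-wins conflict resolution, and a privacy deny list. The names and the specific policy choices are assumptions for illustration, not MemArchitect's actual interface.

```python
import time

class GovernedMemory:
    """Toy rule-based memory governance: decay, conflict resolution, privacy."""

    def __init__(self, ttl_seconds, deny_keys=()):
        self.ttl = ttl_seconds       # decay horizon
        self.deny = set(deny_keys)   # privacy: keys the agent may never store
        self.store = {}              # key -> (value, timestamp)

    def remember(self, key, value, now=None):
        if key in self.deny:
            return False  # privacy policy blocks the write entirely
        now = time.time() if now is None else now
        # Conflict resolution (here: last write wins) -- the superseded
        # value is overwritten, so it can't shadow the new one.
        self.store[key] = (value, now)
        return True

    def recall(self, key, now=None):
        now = time.time() if now is None else now
        item = self.store.get(key)
        if item is None:
            return None
        value, ts = item
        # Decay: entries older than the TTL expire unless a fresh
        # write has reinforced them.
        if now - ts > self.ttl:
            del self.store[key]
            return None
        return value

mem = GovernedMemory(ttl_seconds=3600, deny_keys={"ssn"})
mem.remember("user_language", "Java", now=0)
mem.remember("user_language", "Python", now=10)  # supersedes the old fact
print(mem.recall("user_language", now=20))       # -> Python
print(mem.recall("user_language", now=5000))     # -> None (decayed)
print(mem.remember("ssn", "123-45-6789"))        # -> False (privacy rule)
```

The point of the design is that these rules live outside the model: the same agent, with the same weights, behaves differently depending on the governance layer wrapped around its memory.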
The authors — Lingavasan Suresh Kumar, Yang Ba, and Rong Pan — report that governed memory consistently outperforms unmanaged memory in agentic settings, though the claim comes from a preprint that has not yet been peer reviewed. They argue that structured memory governance is necessary for reliable and safe autonomous systems.
The zombie memory problem is intuitive once named. Anyone who's had a conversation with an LLM that clearly built on a false premise it picked up earlier has experienced a version of it. In a simple chat, it's a nuisance. In a persistent agent that's making real decisions — approving access, executing code, signing off on actions — a zombie memory isn't a bug in the interface; it's a failure mode in the governance structure.
The paper's framing is distinct from the permission and access governance stories that dominated earlier this week. Those were about what agents are allowed to do. MemArchitect is about what agents are allowed to remember — and what they're allowed to forget. Both are governance gaps. Both are getting academic attention now.
The paper is available on arXiv.