Silicon Valley has a new old idea: make AI argue. The Catholic Church invented the mechanism centuries ago with the Promoter of the Faith, informally the Devil's Advocate, whose sole job was to argue against making someone a saint. Edward de Bono turned the same instinct into a business-management framework in 1985, calling it Six Thinking Hats. Now two independent open-source projects have arrived at the same pattern as AI agent infrastructure: a setup in which distinct AI personas are required to disagree with one another. One project, rockcat/HATS, assigns six personas (White Hat for facts, Black Hat for risks, Green Hat for alternatives, and three others) that argue inside a shared meeting. A separate GitHub project released its own implementation in May 2025. The pattern has surfaced before: Fabio Lalli published on the idea in June 2025, and the approach has roots in multi-agent LLM research going back years.
The implementations reveal that structured adversarial roles between AI agents are now buildable by a single developer over a weekend with the right API keys. The pitch: large language models have a sycophancy problem. When multiple agents review the same code or decision, they tend to agree rather than disagree, producing confident wrong answers instead of catching errors. The fix, in this pattern, is mandatory dissent: the Black Hat is required to identify failure modes even when every other agent thinks the design is sound. The constraint is the product.
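The orchestration behind that constraint is simple enough to sketch. The following is a hypothetical illustration, not code from either project: `call_model` stands in for any LLM client, and the role prompts, hat names, and `review_round` function are invented for this example. The only load-bearing idea is that the Black Hat's prompt makes dissent mandatory and runs last, so it can push back against the other hats' output.

```python
# Hypothetical sketch of a forced-dissent review round. Not HATS's
# actual API; call_model is a stand-in for any LLM client.

ROLES = {
    "white": "State only verifiable facts about the proposal.",
    "green": "Propose at least one alternative approach.",
    # The Black Hat's prompt makes dissent mandatory, not optional.
    "black": "You must identify at least one concrete failure mode, "
             "even if every other reviewer approves the proposal.",
}

def call_model(role_prompt: str, content: str) -> str:
    """Stand-in for a real LLM call (OpenAI, Claude, a local model)."""
    return f"[{role_prompt[:24]}...] review of: {content[:40]}"

def review_round(proposal: str) -> dict:
    """Collect one response per hat; the Black Hat runs last so its
    required dissent can see what the other hats already said."""
    responses = {}
    for hat in ("white", "green"):
        responses[hat] = call_model(ROLES[hat], proposal)
    context = proposal + "\n\nOther reviews:\n" + "\n".join(responses.values())
    responses["black"] = call_model(ROLES["black"], context)
    return responses
```

The ordering is the design choice worth noticing: a dissenter that speaks first is just another opinion, while one that speaks last, with the consensus in its context window, is forced to argue against something concrete.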
HATS runs each hat as an animated 3D avatar in a browser window, speaking via text-to-speech, with a Kanban board tracking tasks across six workflow stages and connections to external tools through the Model Context Protocol, an open standard for wiring AI agents to external tools and data sources regardless of model provider. Five meeting types are built in: standup, sprint planning, retrospective, review, and ad hoc. Token usage and cost are tracked per agent, and each agent can run on a different model: OpenAI, Claude, Gemini, or a local setup via Ollama or LM Studio. The rockcat/HATS README has the full feature list.
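Per-agent model routing and per-agent cost tracking are the two architectural details worth a sketch. Everything below is illustrative, not HATS's actual configuration format: the table shape, provider names, and helper functions are assumptions about how such a system might be wired.

```python
# Hypothetical per-hat model routing and usage ledger; provider and
# model names are illustrative, not HATS's real config.
from collections import defaultdict

HAT_MODELS = {
    "white": {"provider": "openai", "model": "gpt-4o"},
    "black": {"provider": "anthropic", "model": "claude-sonnet"},
    # A local model for low-stakes hats keeps API spend down.
    "green": {"provider": "ollama", "model": "llama3",
              "base_url": "http://localhost:11434"},
}

DEFAULT = {"provider": "openai", "model": "gpt-4o-mini"}

def client_for(hat: str) -> dict:
    """Resolve a hat name to its provider config, with a fallback."""
    return HAT_MODELS.get(hat, DEFAULT)

# Running tally of tokens and dollars, keyed by hat.
usage = defaultdict(lambda: {"tokens": 0, "cost_usd": 0.0})

def record_usage(hat: str, tokens: int, usd_per_1k: float) -> None:
    """Accumulate token count and cost for one agent's call."""
    usage[hat]["tokens"] += tokens
    usage[hat]["cost_usd"] += tokens / 1000 * usd_per_1k
```

Keeping the ledger per hat rather than per session is what lets a team notice, say, that the mandatory-dissent agent burns the most tokens, since it reads everyone else's output before responding.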
Whether the pattern actually reduces confident wrong answers is the open question. There's no published data on whether forcing a dissent round catches more errors than letting agents vote. The de Bono estate has not weighed in on commercial AI implementations. Whether anyone is using either open-source project remains genuinely unclear. GitHub's rate-limiting prevented a star or fork count at press time, no customers are announced, and no venture backing is visible in either repository.
The adoption question is the ballgame. If nobody is using this, there is no story: just two well-documented demos converging on the same idea. If someone is, the question is who and at what scale. What to watch: whether any major AI lab quietly ships a structured-dissent mode inside an existing product, the kind of feature that ships without a press release because the marketing team does not know how to explain it. Also whether the de Bono estate moves to protect the trademarked framework as implementations multiply. The Catholic Church concluded the formal Devil's Advocate role was slowing canonizations too much and substantially reduced it in 1983. Whether that tells us anything about how AI companies handle critical feedback is left as an exercise for the reader.