Anthropic's Claude Code Flags You as Negative If You Type "wtf"
When developers use Claude Code and type "wtf" or "this sucks," the software logs it. Not to train a model. Not to change behavior. To build a chart.
The finding, disclosed in a March 31, 2026 source code leak and confirmed on X by Boris Cherny, the creator of Claude Code at Anthropic: Claude Code runs a regex scan on prompts for profanity and logs an is_negative: true flag to analytics. The dashboard has a name. Cherny called it the "f*s chart."
The privacy policy does not mention it.
The official Claude Code data usage page, at code.claude.com/docs/en/data-usage, states that the tool "connects from users' machines to the Statsig service to log operational metrics such as latency, reliability, and usage patterns." It says logging "does not include any code or file paths." It does not say that a list of trigger words including "wtf," "ffs," "piece of s," "f you," and "this sucks" is being pattern-matched and used to flag users as negative. That is a different thing.
Anthropic confirmed the leak on Tuesday. A spokesperson said no sensitive customer data or credentials were exposed and described it as a release packaging issue caused by human error. The source map file was sitting on Anthropic's own Cloudflare R2 storage bucket, not a third-party system. An identical incident occurred with an earlier version of Claude Code in February 2025. The affected release this time was version 2.1.88.
This matters for two separate reasons.
The first is the disclosure gap. The industry's standard answer to privacy concerns about AI coding tools is that telemetry is used for reliability and product improvement, and that code and file paths are not logged. That answer is accurate as far as it goes. It does not go far enough. A product analytics system that flags users as negative when they type profanity is not a reliability metric. It is a behavioral profile. The distinction matters to developers who assumed their rough language was not being tracked, even anonymously, even for a dashboard with a name.
The second is the method. Scientific American put it plainly: an LLM company using regexes for sentiment analysis is peak irony. The tool that is supposed to understand nuance is monitoring for exact string matches. The frustration signal is not derived from the model's own assessment of user intent. It is a keyword filter.
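The technique the leak describes fits in a few lines. The sketch below is illustrative, not Anthropic's actual source: the pattern list is reconstructed from the trigger words reported in the leak, and the event shape is a hypothetical stand-in for whatever Claude Code actually sends to Statsig.

```python
import re

# Illustrative trigger list, based on the terms reported in the leak.
# The real list and regex inside Claude Code may differ.
NEGATIVE_PATTERN = re.compile(
    r"wtf|ffs|piece of s|f you|this sucks",
    re.IGNORECASE,
)

def is_negative(prompt: str) -> bool:
    """Flag a prompt as 'negative' on a literal substring match.

    No model, no intent analysis: a plain keyword filter.
    """
    return bool(NEGATIVE_PATTERN.search(prompt))

# Hypothetical analytics event resembling the leaked flag:
event = {
    "event": "prompt_submitted",
    "is_negative": is_negative("wtf is going on"),
}
```

The point the sketch makes concrete: the signal is a string match, so "this sucks" flips the flag while an equally frustrated prompt that avoids the listed words does not.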
Cherny's response was revealing: he confirmed the dashboard exists, called it the f*s chart, and said the solution is "more automation and Claude checking the results." More automation as the fix for a release packaging error that exposed internal source code is a certain kind of answer.
Users who want to opt out can set DISABLE_TELEMETRY=1. The flag is documented on the data usage page. The flag is not mentioned in the section that describes what Statsig logs. The opt-out is disclosed. The thing the flag disables is not described anywhere in the disclosure.
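Per the data usage page, the opt-out is a single environment variable. The shell lines below show the documented flag; where you persist it is up to your setup.

```shell
# Disable Claude Code telemetry for the current shell session.
# Add this line to ~/.bashrc or ~/.zshrc to make it persistent.
export DISABLE_TELEMETRY=1
```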
The leak also showed that Claude Code, when running in what Anthropic calls "undercover mode," scrubs references to Anthropic-specific names from code before it is committed publicly. The regex profanity detector runs separately, all the time, in normal mode. These are two different design choices living in the same codebase. One is documented. One is not.
Anthropic has not responded to a request for comment on the specific question of whether the profanity monitoring was disclosed anywhere other than the code itself.
Claude Code is not a small product. As of February 2026, the tool's run-rate revenue had swelled to more than $2.5 billion, according to CNBC. At that scale, behavioral telemetry that outpaces disclosure is not a technical footnote. It is a policy question that the industry's standard privacy language does not currently answer.