The Linux kernel has spent decades teaching humans how to file bug reports. Now it is formally teaching AI agents.
Linux 7.0-rc7 includes updated documentation for the kernel's security-bugs reporting process. The changes do not describe a new feature or a patch review tool. They describe what a good AI-generated bug report looks like: what information to include, what format to use, what the security team needs to act on a report. This is the kernel's first documentation standard written with AI agents as intended participants, not noise to be filtered.
The timing is not accidental. Greg Kroah-Hartman, who maintains the kernel's char/misc subsystem, told The Register in March that something shifted about a month prior. "The world switched," he said. Reports that had been unusable AI-generated noise became genuinely useful. Kroah-Hartman tested the technology directly: he gave an AI tool a codebase, asked it to find problems, and received 60 results. Roughly one-third were wrong but pointed at real underlying issues. The remaining two-thirds produced correct patches, though each required human review and cleanup before submission.
The contrast with other open source projects is instructive. Daniel Stenberg, founder of the cURL project, ended the cURL bug bounty program on January 31, 2026 because AI-generated submissions had made it unworkable. The bounty attracted low-quality AI reports faster than the team could process them. cURL shut down the program; the kernel formalized the format. The difference is not that one project has better AI and the other has worse. It is that the kernel has the maintainer bandwidth and process infrastructure to absorb AI as a contributor, while cURL did not.
The more concrete evidence for the kernel's approach comes from Sashiko, an agentic code review tool developed by Roman Gushchin at Google, written in Rust, and donated to the Linux Foundation. Sashiko has run over 16,500 reviews on the Linux Kernel Mailing List (LKML) and identified bugs in just over half of an unfiltered set of the last 1,000 upstream commits carrying Fixes: tags. The tool supports Gemini 3.1 Pro and Anthropic's Claude, and Gushchin has said it could theoretically work with any large language model. The 53.6% detection rate is not the point. The point is that the tool is running at scale on a live open source project and its output is entering the review process.
Chris Mason at Meta had pioneered AI-based review workflows for the kernel's eBPF and networking code before Sashiko existed, which gave Kroah-Hartman a reference point for what production-scale AI review looked like.
The documentation update formalizes what was already happening informally. The kernel security team has long spent significant time requesting patch proposals from reporters and asking for submissions in the correct format. The new language in security-bugs.rst specifies required and desirable contents for security reports, reducing the back-and-forth between reporters and maintainers. This is not a solution to AI-generated quality variance. It is plumbing: infrastructure for managing the relationship between a human-reviewed project and an AI contributor at scale.
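The shape of such a report can be sketched as follows. This is an illustrative template, not the actual wording of security-bugs.rst; every bracketed field is a placeholder, and only the security@kernel.org contact address reflects the kernel's long-standing documented process.

```text
To: security@kernel.org
Subject: <bug class, e.g. use-after-free> in <subsystem>: <one-line summary>

Affected: mainline as of <commit>, stable back to <version>
Component: <file>:<function>()

Analysis:
  <how the bug is triggered and why it matters>

Reproducer:
  <minimal trigger, or a note that none exists>

Proposed fix:
  <patch in git format-patch form, if available>
```

The value of a fixed shape like this is exactly the plumbing the paragraph above describes: a maintainer can reject or triage a report on structural grounds before spending time on its substance.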
Whether this represents a durable shift or an accommodation to current conditions depends on whether the underlying quality problem resolves. Sashiko's detection rate means nearly half of real bugs still slip through. The two-thirds of correct patches from Kroah-Hartman's test still needed human cleanup. The documentation does not fix either problem. What it does is establish that the kernel intends to keep receiving these reports and has written down how reporters should format them. That is a workflow decision, not a quality decision. The quality question remains open.
What the documentation update signals is that AI is no longer an edge case in kernel reporting. It is a known participant with a defined format. The Co-developed-by: tag used to credit AI tools on patches already existed in the kernel's tooling. The docs now explain what those patches should contain. That is the dependency graph becoming visible: the kernel built the hook before writing the manual.
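For context, kernel patch attribution lives in commit-message trailers. A hypothetical commit crediting an AI tool might look like the sketch below; note that for human co-authors the kernel's submitting-patches rules require each Co-developed-by: line to be followed by that co-author's own Signed-off-by:, and how that maps onto agents is precisely the kind of question the documentation work has to settle. All names here are placeholders.

```text
<subsystem>: <one-line change summary>

<what the patch changes and why>

Co-developed-by: <AI tool or agent identifier>
Signed-off-by: Human Submitter <submitter@example.org>
```

The trailer mechanism is generic git convention; what the kernel's docs add is the policy layer on top of it.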
The Linux 7.0-rc7 documentation change is modest in scope. But in the ongoing, messy process of figuring out what AI-generated open source contributions actually are, a formal documentation standard from the kernel is data. The kernel is not saying AI reports are good. It is saying they are regular enough to warrant a format.