AI Coding Tools Nearly Double Developer Output, Quality Holds Steady
AI coding tools are nearly doubling software developer output with minimal impact on code quality, according to a new benchmark study from engineering intelligence platform Jellyfish.

The study analyzed data from more than 700 companies, 200,000 engineers, and 20 million pull requests. Median AI tool adoption across companies stands at 63%, and 64% of companies now generate a majority of their code with AI assistance.

Companies with the highest levels of AI adoption - defined as 75% to 100% of engineers using AI coding tools three or more days per week - merged an average of 2.2 pull requests per engineer per week, nearly double the 1.12 weekly pull requests at low-adoption companies. Code quality remains stable: revert rates tick up only modestly, from 0.61% at low-adoption companies to 0.65% at the highest tier.

Nicholas Arcolano, head of research at Jellyfish, said he stopped writing code himself in fall 2025, turning that work over to AI tools. He calls the moment around the holidays when many engineers discovered Anthropic's Claude Code "Claude Christmas." Autonomous agent activity remains a small but rapidly climbing share of overall work, particularly among top adopters.

Primary source: Business Insider

