Google Says 75% of Its New Code Is AI-Generated. Its Own Engineers Prefer Anthropic.
The 75% AI-code figure masks a structural problem: Google ties AI tool usage to performance reviews, pressuring engineers to adopt the tools that drive the metric — not necessarily the ones that produce the best code.

Google reports that 75% of its new code is AI-generated, up from 25% just eight months ago, with AI usage now tied to employee performance reviews. However, the metric measures tool engagement rather than actual productivity gains, and DeepMind engineers have privately preferred Anthropic's Claude Code over Google's own Gemini for sensitive work. The structural risk is that AI generates code faster than human engineers can properly audit it, creating a misalignment between the performance metric and actual code quality assurance.
- Google's 75% AI-generated code metric measures tool engagement, not necessarily productivity improvement or code quality, since the bar for "weekly usage" includes anyone who opened a tool once.
- Performance review incentives tied to AI usage may be artificially inflating adoption rates faster than the quality assurance process can verify the output.
- DeepMind engineers strongly preferred Anthropic's Claude Code over Google's own Gemini for sensitive work, creating internal tension when Google proposed removing the competitor's tool.
