# Sophia Trains Faster But Gets Nowhere Farther: What Stanford FLeX Code Research Found

- slug: sophia-trains-faster-but-gets-nowhere-farther-what-stanford-flex-code-research-found
- date: 2026-04-09
- category: Artificial Intelligence

The Sophia optimizer trains code models 30% faster than AdamW, but new Stanford research shows that the speed advantage comes at no accuracy cost: either good news or a red flag, depending on what you thought the speedup meant.

---