The race to wire AI data centers with light instead of copper has a quiet frontrunner, and it is none of the companies making headlines at OFC and GTC this spring.
Google has been running optical circuit switches across its data centers since 2013, when the company deployed a system called Apollo that let it reconfigure entire network topologies without touching a cable. The current generation, Palomar, is deployed across all Google infrastructure, and Google believes it is the largest OCS deployment in the world by a comfortable margin. The maximum power draw of the entire Palomar system is 108 watts. A comparable electrical packet switch handling the same traffic pulls roughly 3,000 watts, according to Data Center Dynamics.
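Taken at face value, the two reported figures imply a gap of nearly 28x in switch power. A quick back-of-the-envelope check (the wattages are from the reporting above; the derived ratio is illustrative, not a vendor figure):

```python
# Back-of-the-envelope comparison of the reported switch power figures.
# Inputs are the numbers quoted in the article; the ratio is derived here.
palomar_max_watts = 108      # reported maximum draw of the Palomar OCS
electrical_watts = 3_000     # comparable electrical packet switch (reported)

ratio = electrical_watts / palomar_max_watts
print(f"An electrical packet switch draws ~{ratio:.0f}x the power of Palomar")
```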
The rest of the industry is just arriving.
At GTC 2026 in March, Nvidia chief Jensen Huang announced that co-packaged optics (CPO) would arrive in 2028 with the Feynman NVLink 8 CPO switch, built on TSMC's COUPE silicon photonics platform. TSMC has said COUPE enters volume production in 2026 and delivers a 5-10x improvement in power efficiency and 10-20x lower latency compared to conventional SerDes links, according to TrendForce. Nvidia's Spectrum SN6800 networking platform delivers 409.6 terabits per second of bandwidth across 512 ports operating at 800 gigabits per second, according to Nvidia's GTC blog. Hyperscalers currently account for 60 percent of Nvidia's demand, and the company has disclosed at least $1 trillion in AI compute orders booked through 2027, Nvidia reported at GTC 2026.
The power math is driving urgency. An Ayar Labs and Wiwynn rack-scale reference design, announced in March, targets more than 1,024 GPUs in a single rack with an all-CPO interconnect, pulling 100 to 200 kilowatts per rack, The Register reported. Nvidia's comparable copper-based Vera Rubin Ultra system draws roughly 600 kilowatts per rack. Training clusters and inference farms are running into the same wall: electrical interconnects dissipate too much heat at the rack density AI workloads demand.
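The rack-level figures above can be lined up directly. A minimal sketch using only the numbers reported in this article (the per-GPU and ratio calculations are my derivation; the two systems differ in GPU count and generation, so this is an envelope comparison, not apples-to-apples):

```python
# Illustrative comparison of the two reported rack power envelopes.
# Inputs are the figures quoted above; derived numbers are mine.
cpo_rack_kw = (100, 200)   # Ayar Labs / Wiwynn all-CPO reference design
cpo_gpus = 1024            # "more than 1,024 GPUs" per rack
copper_rack_kw = 600       # Nvidia Vera Rubin Ultra (copper-based), reported

low_w, high_w = (kw * 1000 / cpo_gpus for kw in cpo_rack_kw)
print(f"CPO design: ~{low_w:.0f}-{high_w:.0f} W per GPU at the rack level")
print(f"Copper rack envelope is {copper_rack_kw / cpo_rack_kw[1]:.1f}-"
      f"{copper_rack_kw / cpo_rack_kw[0]:.1f}x the CPO envelope")
```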
Lumentum, which makes the MEMS-based OCS hardware that Google originally licensed and then surpassed internally, said its OCS business cleared its first $10 million quarterly target three months ahead of schedule and now carries an order backlog exceeding $400 million spread across multiple customers, Fierce Network reported. Cignal AI has forecast the optical circuit switch market will reach at least $2.5 billion by 2029. The Open Compute Project launched an OCS subproject in April co-led by Lumentum and iPronics, with Google, Microsoft, Nvidia, and Coherent as initial participants, OCP announced.
The technical appeal is straightforward. An OCS switches traffic entirely in the optical domain, eliminating the optical-electrical-optical conversions of a packet switch along with the latency and power those conversions consume. Google's Jupiter network, which uses Palomar OCS as a reconfiguration layer, scales to more than 6 petabits per second; it cuts power consumption by 40 percent and cost by 30 percent, improves throughput by 30 percent, and delivers a 50x improvement in reliability compared to electrical switching, according to Google's own data on the Jupiter evolution. The OCS layer is also rate-agnostic: it does not need to be replaced when the underlying electrical infrastructure moves from 40 gigabit per second to 100 gigabit or 400 gigabit links.
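Google's reported percentages can be folded into a single derived metric. A sketch, assuming the 40 percent power reduction and 30 percent throughput gain apply to the same baseline (the combined throughput-per-watt figure is my derivation, not a number Google publishes):

```python
# Combining Google's reported Jupiter improvements into one derived metric.
# The 40%/30% figures are from the article; throughput-per-watt is derived.
power_factor = 1 - 0.40        # 40% less power than electrical switching
throughput_factor = 1 + 0.30   # 30% higher throughput

perf_per_watt = throughput_factor / power_factor
print(f"Implied throughput per watt: ~{perf_per_watt:.2f}x the electrical baseline")
```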
Ayar Labs closed a $500 million Series E funding round in March specifically to accelerate mass production of CPO chiplets, The Register reported. Its SuperNova laser modules are user-serviceable and front-mounted, meaning replacement does not require shutting down the rack.
The energy constraint is not theoretical. OpenAI topped $25 billion in annualized revenue run rate as of early 2026, The Information reported. Anthropic narrowed the gap to approximately $19 billion annualized, putting both companies deep in infrastructure-scale deployment. The GPU count required to serve that inference load is not growing linearly with revenue. At some rack density, copper runs out of road.
Google's head start matters for reasons beyond bragging rights. The company spent years debugging OCS reliability at scale before the term entered the industry lexicon. Its network is wavelength-agnostic and topology-flexible in ways that require deep integration with the control plane, and that integration work does not appear in any product announcement. When TSMC's COUPE enters volume production and Nvidia's CPO switches arrive in 2028, the industry will be catching up to a system that has been quietly running Google's production network for 13 years.
The question is not whether the optical transition will happen. Hyperscalers have too much of their power budget pinned down by electrical interconnects to ignore it. The question is how many years the gap persists between what Google shipped in 2013 and what the rest of the industry manages to deploy before the power bill makes the decision for them.