The AI datacenter industry has an optics problem, and it is not the one everyone thinks it is.
Nvidia's latest Ethernet and InfiniBand switches, announced in March 2026, ship with co-packaged optics as standard. CPO is the practice of integrating optical engines directly onto the switch package. The pitch: cut the electrical distance from ASIC to photonics from 15 to 30 centimeters down to a few millimeters, eliminate the front-panel transceiver, and collect the power savings. Nvidia claims a 3.5x improvement in power efficiency and a 10x improvement in link resiliency versus pluggable transceiver architectures. Those are numbers from a shipped product. The investment community noticed: Ayar Labs, a startup building chiplet-based optical I/O, closed a $500 million Series E at a $3.75 billion valuation in early March, with Nvidia, AMD, and MediaTek as strategic backers, bringing its total funding to $870 million. The same week, Nvidia separately committed $4 billion to Lumentum and Coherent, established suppliers of optical components.
The narrative is clean. Bandwidth density is the bottleneck in AI factories. Optics solve it. Deploy and scale.
A paper posted to arXiv on March 22, from Yasha Yi of the University of Wisconsin-Milwaukee and the MIT Microsystems Technology Laboratory and Danny Wilkerson of Invictus Innovation EV Technology, argues that the industry is solving for the wrong variable. "Co-packaged optics should not be viewed as a component-level optimization," the authors write, "but as an architectural commitment that reshapes the boundaries between photonics, electronics, and system design." The numbers everyone cites, power per bit and bandwidth density, are downstream of the hard part: packaging, thermal coupling, and serviceability. Those are not component decisions. They are system decisions that constrain what you can build, how you build it, and who can afford to build it at scale.
The packaging problem is where the story gets expensive. In CPO, the photonic integrated circuit sits beside the switch ASIC inside the same package. The fiber attaches directly to the chip. This eliminates the electrical loss from routing signals across a PCB, but it creates a different kind of mechanical and thermal engineering problem. At the densities AI factories demand, a single two-inch-square chip can draw nearly 35,000 amps, consuming up to 35 kilowatts, according to Keysight EDA data cited at a chiplet design conference. The power delivery, the thermal management, and the structural integrity of a multi-die assembly are co-dependent. You cannot solve one without touching the others.
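The arithmetic behind those figures is worth making explicit. A back-of-envelope sketch, using only the Keysight numbers above plus an assumed 5 percent loss budget, shows why power delivery becomes the binding constraint:

```python
# Back-of-envelope: what 35 kW at ~35,000 A implies for power delivery.
# The power and current figures are the Keysight EDA numbers cited above;
# the 5% loss budget is an assumption for illustration, not a spec.

power_w = 35_000.0    # chip power draw, watts
current_a = 35_000.0  # chip current draw, amps

supply_v = power_w / current_a
print(f"implied supply rail: {supply_v:.2f} V")  # ~1 V

# Resistive loss in the delivery network scales as I^2 * R, so at 35 kA
# even micro-ohms matter. Capping delivery loss at 5% of chip power:
loss_budget_w = 0.05 * power_w
r_max = loss_budget_w / current_a**2
print(f"max total delivery resistance: {r_max * 1e6:.1f} micro-ohms")  # ~1.4
```

A delivery path measured in single-digit micro-ohms is a mechanical and materials problem as much as an electrical one, which is exactly the co-dependence the paper describes.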
The gap between investment thesis and manufacturing reality is concrete. Global sales of lasers and photonic integrated circuits for optical transceivers are expected to grow from $2.4 billion in 2023 to $5.9 billion by 2029, driven largely by AI datacenter demand, according to LightCounting data cited by Siemens. McKinsey estimates that meeting global AI demand will require $5.2 trillion in data center investment by 2030. Those are investment-scale figures. The manufacturing reality is messier. Mordor Intelligence, a market research firm, estimated in early 2026 that several foundries were achieving sub-70 percent yields and that TSMC's COUPE program — the company's platform for optical chiplet integration — targets 60 to 65 percent yield at maturity, implying 35 to 40 percent scrap at full production. Those are market-research estimates from a single firm; I could not independently verify them from a primary source, TSMC material, or a credible independent technical outlet. But they track with what the packaging economics imply: packaging costs more than the optical engines themselves, according to SemiAnalysis. That is not a component market problem. That is a cost architecture problem.
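One way to see why die-level yield becomes a cost architecture problem: in a multi-die package, a single bad die can scrap the whole assembly. A minimal sketch, assuming independent die yields and hypothetical engine counts; real flows test for known-good die before assembly, which softens but does not eliminate the effect:

```python
# Why die-level yield compounds at the package level. Assumes independent
# yields; all numbers are hypothetical, chosen to bracket the 60-65%
# engine-yield estimates discussed above.

def package_yield(asic_yield: float, engine_yield: float, n_engines: int) -> float:
    """Probability that every die in a co-packaged assembly is good."""
    return asic_yield * engine_yield ** n_engines

print(f"{package_yield(0.90, 0.65, 4):.1%}")  # 4 optical engines: ~16%
print(f"{package_yield(0.90, 0.65, 8):.1%}")  # 8 optical engines: ~2.9%
```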
Ring resonator heaters illustrate the thermal coupling challenge in miniature. Each ring, a circular photonic structure that modulates light by shifting its resonant wavelength, consumes approximately 1 to 10 milliwatts to maintain thermal stability. Current systems use around 64 rings. DARPA programs have discussed integrating thousands of rings in future designs, approaching the transistor-count densities of modern silicon chips. Lang Lin, a principal product manager at Synopsys, told Semiconductor Engineering that 1,000 rings at 1 milliwatt each would consume roughly 1 watt for thermal management alone, before accounting for the amplifiers, SerDes, and power delivery circuitry the rings need to talk to. At AI factory scale, that 1 watt per thousand rings is not negligible.
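Scaled out, the heater budget stops looking small. A quick sketch using the per-ring figures above; the fleet size is a hypothetical, not a deployment number:

```python
# Ring-heater power at fleet scale. Per-ring power and the 1,000-ring
# design point come from the figures cited above; the engine count is a
# hypothetical AI-factory deployment, assumed for illustration.

mw_per_ring = 1.0           # low end of the 1-10 mW range
rings_per_engine = 1_000    # the DARPA-scale design point
engines_in_fleet = 100_000  # hypothetical fleet size

heater_kw = mw_per_ring * rings_per_engine * engines_in_fleet / 1e6
print(f"fleet heater power: {heater_kw:,.0f} kW")  # 100 kW at 1 mW/ring
# At the 10 mW top of the range, the same fleet draws 1 MW continuously,
# before a single bit moves.
```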
The serviceability question is less technical and more economic. Pluggable transceivers are field-replaceable: a technician pulls a failed module and swaps it. CPO integrates the optical engine onto the package substrate. When the photonic component fails, the switch assembly is effectively non-field-serviceable. Meta presented reliability data from a scale-out switch study at ECOC 2025 — a signal that at least one hyperscaler is running the numbers on operational cost over a five-year deployment horizon, not just peak bandwidth per rack.
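The shape of that calculation is simple even if the inputs are proprietary. A toy expected-cost model follows; every number in it is a hypothetical stand-in, and the point is the structure of the tradeoff, not the magnitudes:

```python
# Toy serviceability economics: field-swappable pluggables vs. a CPO
# assembly that cannot be serviced in the field. All inputs hypothetical.

ports = 10_000          # optical ports in service
years = 5               # the deployment horizon the article cites
afr = 0.02              # assumed annual failure rate per port

pluggable_swap = 1_500  # cost to replace one failed module, dollars
cpo_swap = 40_000       # cost to replace the whole switch assembly, dollars

failures = ports * afr * years
print(f"pluggable opex: ${failures * pluggable_swap:,.0f}")  # $1.5M
print(f"CPO opex:       ${failures * cpo_swap:,.0f}")        # $40M

# Nvidia's claimed 10x link resiliency cuts the CPO failure count tenfold:
print(f"CPO opex at 10x resiliency: ${failures / 10 * cpo_swap:,.0f}")  # $4M
```

On those toy numbers, the resiliency claim is doing most of the work in CPO's operational case, which may be why Nvidia leads with it.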
Standardization is the other open question. Traditional optical interconnects benefit from multi-source agreements that define mechanical and electrical standards for transceivers, enabling interoperability across vendors. CPO is earlier in that process. The Open CPX MSA initiative aims to standardize CPO optical engines, but customers cannot yet mix photonic engines from different vendors the way they can with pluggable QSFP-DD modules. Until standardization matures, procurement and supply chain risk sit with the hyperscaler rather than being distributed across a component ecosystem.
Nvidia's approach sidesteps some of this uncertainty by owning more of the stack. Its Spectrum-X Photonics platform integrates silicon photonics directly into the switch ASIC package — 409.6 terabits per second of bandwidth across 512 ports at 800 gigabits per second in the SN6800 chassis. Nvidia is not selling a component; it is selling a system. That is consistent with the paper's framing: the decision to use CPO reshapes what you are actually buying.
Microsoft Research is pursuing a different path. Its MOSAIC program, developed at the Cambridge lab with Azure hardware teams, uses MicroLEDs rather than lasers as the light source — hundreds of parallel low-speed channels through imaging fiber, the same multi-core cable used in medical endoscopy. Microsoft estimates roughly 50 percent less power draw than mainstream laser-based optical cables. The MicroLED approach avoids the laser supply chain constraints that Gartner analyst Naresh Singh said are expected to persist through 2027. Microsoft expects commercialization by late 2027. The caveat is bandwidth: MOSAIC's current sweet spot is 400G to 800G. By the time it deploys at scale, the industry may have moved to 1.6T or 3.2T targets, according to Counterpoint Research. Without buy-in from Nvidia or AMD on the chiplet integration side, the scalability question remains open.
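The core trade is easy to state in rough numbers. A sketch of the lane arithmetic; the channel counts and per-lane rates below are illustrative assumptions, not Microsoft's published parameters:

```python
# Many slow channels vs. a few fast ones: the MOSAIC bet in outline.
# Lane counts and rates are illustrative assumptions only.

def aggregate_gbps(lanes: int, gbps_per_lane: float) -> float:
    return lanes * gbps_per_lane

# Conventional pluggable optics: few lanes at aggressive SerDes rates.
print(aggregate_gbps(8, 100.0))   # 800.0 Gb/s from 8 x 100G lanes

# MicroLED over imaging fiber: hundreds of modest lanes, with no laser
# and no power-hungry high-speed SerDes per lane.
print(aggregate_gbps(400, 2.0))   # 800.0 Gb/s from 400 x 2G lanes
```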
The US data center power trajectory makes the stakes concrete. American data center power draw is projected to scale from 176 terawatt-hours in 2023 to 580 terawatt-hours by 2028 — more than three times in five years, per Mordor Intelligence. About 60 percent of data center energy is spent on data movement, not compute, according to Siemens. The optics are not a peripheral concern. They are load-bearing infrastructure for a grid that does not currently have the headroom to absorb the growth.
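Putting the article's own numbers together gives a sense of scale. The sketch below applies Nvidia's 3.5x efficiency claim to all data movement as a deliberately naive ceiling; in practice much of that movement is electrical or on-chip, so the real figure is far lower:

```python
# What the projections imply in absolute terms. TWh and percentage figures
# are the Mordor Intelligence and Siemens numbers cited above; applying
# Nvidia's 3.5x claim to all of it is a deliberate overestimate.

us_dc_twh_2028 = 580.0
movement_share = 0.60

movement_twh = us_dc_twh_2028 * movement_share
print(f"data movement by 2028: {movement_twh:.0f} TWh")  # ~348 TWh

ceiling_twh = movement_twh * (1 - 1 / 3.5)
print(f"upper bound on CPO savings: {ceiling_twh:.0f} TWh")  # ~249 TWh
```

Even a small fraction of that ceiling is grid-scale capacity, which is why the packaging, thermal, and serviceability questions above are anything but academic.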