According to a Bloomberg report on Tuesday, OpenAI Chief Operating Officer Brad Lightcap said the company faces potential bottlenecks in expanding its artificial intelligence infrastructure from two directions simultaneously: a shortage of high-bandwidth memory chips needed to run large AI models, and constraints on energy supply for the data centers that would house them.
The warning from Lightcap landed the same week Micron reported that its HBM supply is fully allocated through 2026, with demand for HBM4 outstripping global supply. The memory market's dynamics are structurally unfavorable for AI builders without captive memory supply: SK Hynix and Samsung control the bulk of HBM production, and both are prioritizing allocations toward their own vertically integrated AI systems. Micron is racing to expand capacity, but HBM is technically difficult to manufacture, and the resulting scarcity has shifted the supply-demand balance in ways that go beyond the memory market's typical boom-and-bust cycle.
The energy side of the constraint is underappreciated. Data centers running AI workloads have power requirements that dwarf those of traditional compute workloads, and in many regions the United States grid is not built for the rapid cluster expansion that AI companies need. The combination means that even if a company can acquire enough HBM chips to build out its infrastructure, it may not be able to site or power the facilities to use them.
Who is most exposed to this? Companies without captive memory production or long-term supply agreements are most at risk. OpenAI has preliminary arrangements with Samsung and SK Hynix for the Stargate project, described by Reuters as letters of intent, aimed at securing substantial memory supply through the end of the decade. OpenAI is also building custom silicon with Broadcom, targeted for mass production in 2026 according to unnamed sources cited by the Financial Times, a move that is partly a response to this constraint: designing your own chip is one way to lock in supply relationships. Samsung has separately secured a deal to supply up to 800 million gigabits of 12-layer HBM4 memory chips to OpenAI in the second half of 2026, according to unnamed Korean Economic Daily sources, a claim that could not be independently verified. Microsoft, Google, and Amazon have their own chip programs and their own power procurement operations, which gives them more levers to pull than a smaller AI lab.
The irony is that the companies most visible in the AI race are also the ones most exposed to infrastructure bottlenecks they cannot immediately solve. The HBM shortage is not a problem that can be engineered around in 12 months; new memory fab capacity takes years to bring online. Energy infrastructure upgrades face permitting, grid, and land use constraints that are fundamentally political. Lightcap's warning about these constraints is notable precisely because he is not known for public pessimism about AI scaling.