Musk's Terafab targets 50x current advanced AI chip output.
image from FLUX 2.0 Pro
Elon Musk stood in Austin on March 21 and pitched a hardware stack so large it barely fits in one sentence: build a new advanced chip fab in Texas, use those chips to power orbital data-center satellites, and scale to a million spacecraft over time.
The key correction up front: this is not SpaceX absorbing xAI, as earlier draft language implied. What Musk described publicly is coordination across his companies. SpaceNews describes Terafab as an initiative by SpaceX, Tesla, and xAI, all run by Musk, not as a disclosed corporate consolidation in this filing. That distinction matters because ownership structure determines who funds what, who carries execution risk, and whose balance sheet takes the hit if schedules slip.
On the manufacturing side, the presentation claims are huge and should be treated as claims until there is plant-level evidence. Musk said Terafab is meant to produce one terawatt of processors annually, which he framed as about 50 times the current combined output of chips used in advanced AI workloads. SpaceNews reports that the first step is an "Advanced Technology Fab" in Austin, with most output initially aimed at a space-optimized D3 chip.
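The reported figures can be sanity-checked with simple arithmetic. The sketch below takes the two numbers SpaceNews attributes to Musk (one terawatt of processors annually, roughly 50 times current output) and derives what they imply; the ~1 kW per-chip wattage is a placeholder assumption of mine, not a disclosed Terafab spec.

```python
# Back-of-envelope check of the Terafab scale claim (illustrative only).
# Assumption: "one terawatt of processors annually" means chips whose
# combined rated power draw totals 1 TW. The per-chip wattage below is
# a hypothetical placeholder for a modern AI accelerator, not a D3 spec.

TARGET_TW = 1.0          # claimed annual output, in terawatts
CLAIMED_MULTIPLE = 50    # claimed multiple of current advanced AI chip output
WATTS_PER_CHIP = 1_000   # hypothetical ~1 kW per advanced AI accelerator

# What the 50x framing implies about today's combined annual output:
implied_current_gw = TARGET_TW * 1e12 / CLAIMED_MULTIPLE / 1e9

# How many chips per year 1 TW would mean at the assumed wattage:
chips_per_year = TARGET_TW * 1e12 / WATTS_PER_CHIP

print(f"Implied current annual AI-chip power output: {implied_current_gw:.0f} GW")
print(f"Implied Terafab chip count per year: {chips_per_year:.1e} chips")
```

At those placeholder values, the claim implies today's entire advanced-AI chip industry ships about 20 GW of silicon per year, and Terafab alone would need to ship on the order of a billion accelerator-class chips annually, which is the useful frame for judging the fab-capacity question in the next paragraph.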
What is sourced and concrete today is the scale of incumbent spending, not Terafab construction progress. SpaceNews notes that advanced fabs routinely cost tens of billions of dollars and points to TSMC's U.S. buildout: $65 billion for three Arizona fabs, plus a later announced additional $100 billion U.S. investment plan. That is the right comparison baseline: not whether Musk's target sounds big, but whether comparable capacity has ever been built quickly by teams without prior commercial fab operating history.
The orbital side is also clearer now than in the January filing, though still early-stage. SpaceX's FCC application seeks authority for up to one million satellites for orbital AI/data-center use and requests waivers from standard deployment milestones for this non-interference Ka-band system. At the Austin event, Musk showed an "AI Sat Mini" concept with about 100 kilowatts of onboard power and roughly 100 square meters of radiator area, according to SpaceNews.
Those numbers are physically interesting for one reason: thermal rejection and power delivery are usually where space-compute concepts die. Musk argued radiator concerns are overblown and pointed to SpaceX operating roughly 10,000 Starlink satellites as evidence of orbital thermal experience. That is directionally relevant, but Starlink communications payloads are not the same operating class as 100-kilowatt compute payloads. The burden of proof is still flight hardware, lifetime data, and servicing economics.
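The reported 100 kW / 100 m² pairing is at least in the right physical regime, which a Stefan-Boltzmann estimate makes concrete. The emissivity and radiator temperature below are my assumptions for illustration, not SpaceX design values, and the sketch ignores absorbed solar and Earth infrared loading, which would reduce net rejection.

```python
# Radiative heat rejection per Stefan-Boltzmann: P = eps * sigma * A * T^4
# per radiating face. Emissivity and temperature are hypothetical values
# chosen for illustration; absorbed solar/Earth IR loading is neglected.

SIGMA = 5.670e-8        # Stefan-Boltzmann constant, W/m^2/K^4
AREA_M2 = 100.0         # radiator area reported for the AI Sat Mini concept
EMISSIVITY = 0.9        # hypothetical high-emissivity radiator coating
T_RADIATOR_K = 300.0    # hypothetical radiator temperature (~27 C)

per_side_w = EMISSIVITY * SIGMA * AREA_M2 * T_RADIATOR_K**4
both_sides_kw = 2 * per_side_w / 1e3  # a flat panel radiates from both faces

print(f"Double-sided rejection at {T_RADIATOR_K:.0f} K: {both_sides_kw:.0f} kW")
```

Under these assumptions a double-sided 100 m² panel at room temperature rejects on the order of 80 kW, so dumping a full 100 kW would require running the radiators hotter, adding area, or accepting environmental-loading penalties. The concept is not obviously broken, but the margin is thin, which is why flight data matters more than the analogy to Starlink.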
Musk's cost thesis is that orbital compute can hit parity with terrestrial data centers in two to three years as launch costs fall and space solar availability remains high. That is testable, but only once missing variables are disclosed: launch cadence assumptions, replacement rates, radiation derating penalties, on-orbit failure rates, and networking overhead for real workloads. Right now, those are not public in decision-grade detail.
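The undisclosed variables listed above can be turned into an explicit toy model, which at least shows which inputs dominate the parity question. Every number below is a hypothetical placeholder of mine; none is a disclosed SpaceX or xAI figure.

```python
# Toy parity model: amortized cost of one kW of orbital compute per year
# of useful service. All inputs are hypothetical placeholders, chosen only
# to show the structure of the calculation, not disclosed figures.

def orbital_cost_per_kw_year(launch_cost_per_kg, kg_per_kw, sat_life_years,
                             annual_failure_rate, overhead_factor):
    """Upfront launch cost per kW, inflated by overhead, amortized over
    a lifetime discounted for attrition."""
    upfront = launch_cost_per_kg * kg_per_kw * overhead_factor
    effective_life = sat_life_years * (1 - annual_failure_rate)
    return upfront / effective_life

cost = orbital_cost_per_kw_year(
    launch_cost_per_kg=200,     # $/kg, optimistic Starship-era placeholder
    kg_per_kw=20,               # spacecraft mass per kW of compute power
    sat_life_years=5,           # on-orbit lifetime before replacement
    annual_failure_rate=0.05,   # radiation / attrition losses
    overhead_factor=1.5,        # networking, radiators, derating margin
)
print(f"Orbital compute cost: ${cost:,.0f} per kW-year (toy inputs)")
```

The point of the exercise is not the output number but the sensitivity: launch cost per kilogram and mass per kilowatt multiply directly, so the two-to-three-year parity claim stands or falls on exactly the variables that have not been published.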
So this is where the story sits: the filing is real, the hardware concept is real, the target scale is extraordinary, and the economics are still mostly asserted. If SpaceX starts showing concrete factory milestones and spacecraft qualification data, this moves from ambitious architecture to industrial execution story. Until then, treat it as a high-conviction plan with unusually high dependency on manufacturing speed, not as proven orbital cloud economics.

