Starcloud hit unicorn status this week on a $170 million Series A, becoming the fastest Y Combinator company ever to reach a $1.1 billion valuation, just 17 months after demo day. SpaceNews reported the round was led by Benchmark and EQT Ventures, with angels including former Boeing CEO Dennis Muilenburg, former Starbucks CEO Kevin Johnson, and retired U.S. Air Force General Stephen Wilson. But the funding is less a milestone than a down payment on a bet that has not paid off yet.
The bet is Starship.
Starcloud has real hardware in orbit. Its 60-kilogram Starcloud-1 satellite, launched in November 2025 on a SpaceX Falcon 9 rideshare, carried an Nvidia H100 GPU, the first H100 to run in space. SpaceNews noted the chip ran continuously with zero restart failures; a companion Nvidia A6000 GPU on the same spacecraft failed during launch. Starcloud subsequently used nanoGPT, the compact GPT training codebase written by OpenAI founding member Andrej Karpathy, to train a language model on Shakespeare's complete works in orbit, which the company claims is the first LLM trained entirely in space. CNBC reported the demonstration.
The progression from that 60-kilogram prototype to what Starcloud actually needs is stark. Its next satellite, Starcloud-2 at 450 kilograms, will generate 8 kilowatts of power — roughly 100 times what Starcloud-1 produces — and will carry the largest commercial deployable radiator ever sent to space. SpaceNews reported the satellite will host payloads from the Department of Defense and Earth observation customers, and that revenue from those hosted payloads will for the first time exceed the cost to design, build, and launch the spacecraft. Crusoe, an AI infrastructure provider reportedly valued at $10 billion, has signed on as a launch customer. The satellite will also carry bitcoin mining ASICs alongside an Nvidia Blackwell GPU and an AWS server blade — a payload mix that tells you Starcloud is not waiting for a single customer profile to materialize.
Starcloud-3, the production unit, is 3 tons and 200 kilowatts. One Starship launch could carry about 50 of them, delivering roughly 10 megawatts of computing capacity per flight. SpaceNews reported the company targets a mid-to-late 2028 Starship deployment. The chassis is already welded. A new 3,000-square-meter production facility in Woodinville, Washington, is being built for the assembly line.
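The per-flight numbers above are simple arithmetic, and worth checking. A quick sketch using only the figures reported here:

```python
# Back-of-envelope check on the Starcloud-3 deployment figures quoted above.
sats_per_flight = 50     # Starcloud-3 units per Starship launch
sat_mass_tons = 3.0      # mass of one Starcloud-3
sat_power_kw = 200.0     # power of one Starcloud-3

payload_tons = sats_per_flight * sat_mass_tons
capacity_mw = sats_per_flight * sat_power_kw / 1000.0

print(f"{payload_tons:.0f} tons of payload per flight")  # 150 tons
print(f"{capacity_mw:.0f} MW of capacity per flight")    # 10 MW
```

The 10 megawatts checks out, but note the mass: 150 tons sits at the top of the payload range SpaceX has advertised for Starship, so the 50-per-flight figure assumes the vehicle hits its full spec.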
That is a large gap between a welded chassis and a reliable Starship cadence.
Heat is the reason this is hard. In orbit, there is no convection — only radiation. Jensen Huang, Nvidia's CEO, said at GTC 2026: "In space, there's no convection, there's just radiation, and so we have to figure out how to cool these systems out in space." Philip Johnston, Starcloud's CEO and a former McKinsey consultant who worked on satellite programs for national space agencies, told Sequoia that heat management consumes roughly 70 percent of the company's engineering time. The company has run radiation hardness testing at a cyclotron proton beam accelerator in Knoxville and a heavy ion accelerator at Brookhaven National Laboratory.
The power cost math only works at Starship economics. Johnston has said Starcloud-3 could deliver power at approximately $0.05 per kilowatt-hour — competitive with terrestrial data centers — but only if launch costs fall to around $500 per kilogram. TechCrunch reported he acknowledged the current reality directly: "We are not going to be competitive on energy costs until Starship is flying frequently." He expects commercial Starship access in 2028 or 2029. Falcon 9, the vehicle currently launching everything, runs roughly $3,600 per kilogram. The $170 million Series A is not a bet on today's launch economics. It is a bet on someone else's launch economics.
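The spread between those two launch prices, applied to a single 3-ton Starcloud-3, is stark. A minimal sketch using the per-kilogram figures cited above:

```python
# Launch cost for one 3-ton Starcloud-3 at the two per-kilogram prices cited above.
sat_mass_kg = 3000

for vehicle, usd_per_kg in [("Falcon 9 today", 3600), ("Starship target", 500)]:
    cost = sat_mass_kg * usd_per_kg
    print(f"{vehicle}: ${cost:,} per satellite")

# Falcon 9 today: $10,800,000 per satellite
# Starship target: $1,500,000 per satellite
```

A factor of roughly seven per satellite, before a single GPU or radiator is paid for, which is why Johnston's admission about Starship cadence is not a detail but the whole model.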
Ars Technica ran the numbers: the cost per kilogram to orbit must fall well below $1,000 for orbital data center economics to work at all. The Space Shuttle was over $60,000 per kilogram. Falcon 9 brought that down below $5,000. Starship is supposed to get it lower still. That trajectory has happened before in launch — but the jump from a working 60-kilogram satellite to a constellation that moves the needle on global compute demand requires deploying something closer to 100 gigawatts of orbital capacity. The comparison is instructive: SpaceX's Starlink, with 10,000 spacecraft, produces roughly 200 megawatts of power. Data centers currently under construction in the United States alone exceed 25 gigawatts. TechCrunch noted that the number of advanced GPUs currently in orbit is in the dozens, while Nvidia sold nearly 4 million to terrestrial hyperscalers in 2025.
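Those scale comparisons can be made concrete with the same arithmetic. The inputs are the figures reported above; the 100-gigawatt target is this piece's benchmark for moving global compute demand, not a company commitment:

```python
target_gw = 100          # orbital capacity needed to move the needle on global compute
per_flight_mw = 10       # 50 Starcloud-3s x 200 kW per Starship flight
starlink_power_mw = 200  # reported output of the ~10,000-satellite Starlink fleet

flights = target_gw * 1000 / per_flight_mw
starlink_multiples = target_gw * 1000 / starlink_power_mw

print(f"{flights:,.0f} full Starship flights")               # 10,000 flights
print(f"{starlink_multiples:.0f}x Starlink's total power")   # 500x
```

Ten thousand fully loaded Starship flights, or 500 times the power output of the largest satellite constellation ever built. That is the distance between a demo and a dent.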
The elephant in the room is SpaceX itself. The company filed an application with the Federal Communications Commission in January 2026 for permission to deploy up to one million satellites as orbital AI data centers. CNBC reported the plan, which has drawn opposition from scientists over orbital debris and light pollution concerns. SpaceX CEO Elon Musk unveiled a design called AI Sat Mini at an Austin event in March, each satellite providing 100 kilowatts for onboard AI processors. An illustration SpaceX shared showed the spacecraft at scale next to Starship V3 — if drawn to the same proportions, the AI Sat Mini would be over 170 meters long. Starship itself stands 124 meters tall. SpaceNews reported SpaceX is also building Terafab, a chip fabrication project near Austin that Musk said would produce one terawatt of processors annually — roughly 50 times what all current advanced chip manufacturers combined can produce.
Johnston has been direct about the distinction: SpaceX is building for Grok and Tesla workloads, and is unlikely to become a third-party cloud infrastructure competitor. TechCrunch reported his assessment that SpaceX is unlikely to operate as "an energy and infrastructure player" serving external customers. Whether that assessment survives contact with Musk's stated ambition for one million orbital data centers is a different question.
SpaceX and xAI completed a merger in early 2026, forming a combined entity valued at approximately $1.25 trillion. CNBC reported the deal positions Musk to compete directly with OpenAI, Google, and Meta in AI while preparing SpaceX for a potential IPO later in the year.
The physics of the thing is not subtle. Without atmospheric cooling, all chip waste heat must be radiated away. That imposes hard scaling limits on how much compute you can pack into any individual spacecraft regardless of how many you launch. The orbital data center problem is not ultimately a launch cost problem or a manufacturing problem. It is a thermodynamics problem wearing a launch cost disguise.
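The constraint can be sketched with the Stefan-Boltzmann law, P = εσAT⁴. The emissivity and radiator temperature below are illustrative assumptions, not Starcloud figures, and the one-sided, no-sunlight geometry flatters the real problem:

```python
# Minimum radiator area to reject 200 kW purely by radiation (Stefan-Boltzmann law).
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
emissivity = 0.9        # assumed; typical for radiator coatings
temp_k = 300.0          # assumed radiator temperature, about 27 C

waste_heat_w = 200_000  # a Starcloud-3's full 200 kW, all of it ending up as heat
area_m2 = waste_heat_w / (emissivity * SIGMA * temp_k**4)

print(f"~{area_m2:.0f} m^2 of radiator")  # roughly 480 m^2
```

Nearly 500 square meters of radiator per 200-kilowatt satellite under these generous assumptions. Running the chips hotter shrinks the radiator (the T⁴ term is forgiving in that direction) but costs silicon reliability, which is the trade Starcloud's engineers reportedly spend most of their time on.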
Starcloud has shipped something real. The H100 is running. The training happened. The radiator on Starcloud-2 is larger than anything that has flown commercially before. If Starship works and launch costs actually reach the $500 per kilogram range, the energy economics of orbital compute become genuinely interesting: solar power in orbit delivers roughly ten times as many kilowatt-hours per year as the same capacity on the ground, unconstrained by weather, night, or land use.
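That rough 10x figure is reproducible from first principles. All four inputs below are illustrative assumptions (a dawn-dusk orbit with near-continuous sun, a typical terrestrial site), not company numbers:

```python
# Annual energy yield per square meter of solar panel, orbit vs. ground.
HOURS_PER_YEAR = 8760

orbit_irradiance = 1361.0      # W/m^2, solar constant above the atmosphere
orbit_sun_fraction = 0.99      # assumed: dawn-dusk orbit, almost no eclipse time

ground_rating = 1000.0         # W/m^2, standard terrestrial panel test condition
ground_capacity_factor = 0.15  # assumed: typical fixed-tilt terrestrial solar site

orbit_kwh = orbit_irradiance * orbit_sun_fraction * HOURS_PER_YEAR / 1000
ground_kwh = ground_rating * ground_capacity_factor * HOURS_PER_YEAR / 1000

print(f"orbit: {orbit_kwh:,.0f} kWh/m^2/yr, ground: {ground_kwh:,.0f} kWh/m^2/yr")
print(f"ratio: ~{orbit_kwh / ground_kwh:.0f}x")
```

Roughly a factor of nine under these assumptions, consistent with the "roughly ten times" claim; a sunnier ground site with tracking narrows the gap, a worse orbit widens it.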
That is a real bet. The $170 million buys Starcloud enough time to find out whether the rocket shows up. Johnston is betting Starship launches 50 of his satellites at a time starting in 2028. The rest of the math, from the $0.05 per kilowatt-hour to Starcloud's own 88,000-satellite FCC filing to the production line in Woodinville, depends entirely on that timeline holding.
The history of rocketry includes many timelines that did not hold.