The headline writes itself: startup raises $170 million to build data centers in orbit, hits a billion-dollar valuation, and suddenly "space compute" enters the venture lexicon. Cue the rocket emojis and Elon comparisons.
Here is what actually happened. Starcloud, a Redmond, Washington company that graduated from Y Combinator 17 months ago, closed a Series A led by Benchmark and EQT Ventures that values it at $1.1 billion. The company has raised $200 million total. It launched its first satellite carrying an Nvidia H100 GPU in November 2025 and, according to TechCrunch, used it to train an AI model in orbit and run a version of Gemini. That is a real technical milestone. It is also, by the company's own admission, nowhere near commercially competitive.
The real story is energy, not orbit
Forget the satellites for a moment. The force driving this raise is terrestrial, not celestial. At the end of 2025, data centers requiring 241 gigawatts of power capacity were in the pipeline in the United States alone, according to Fortune. That is more generating capacity than most countries have installed. OpenAI has announced plans for facilities needing more than 30 gigawatts of power in total, more than the largest recorded demand for all of New England, according to The Atlantic. BlackRock's Larry Fink said it plainly this month: the real bottleneck is physical infrastructure, particularly electricity. Hyperscalers are spending over $600 billion in 2026 on data center buildout, and they still cannot get power connected fast enough.
This is the pattern. Every computing paradigm eventually hits an energy wall. Mainframes hit it in the 1970s, which is partly why distributed computing won. Bitcoin mining hit it around 2018, sending miners to Iceland, Texas, and anywhere with cheap kilowatt-hours. Now AI training and inference are hitting the same wall, except the demand curve is steeper than anything before it.
Starcloud's pitch is that low Earth orbit offers near-continuous solar exposure. No land disputes. No grid interconnection queues. No cooling towers. CEO Philip Johnston told TechCrunch the company targets power costs around $0.05 per kilowatt-hour, but only if commercial launch costs reach roughly $500 per kilogram. That requires SpaceX's Starship to be flying regularly, something Johnston himself expects won't happen until 2028 or 2029.
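Johnston's $0.05 figure can be sanity-checked with a rough model. Everything below except the $500-per-kilogram launch price is my own illustrative assumption, not a Starcloud number; the point is only that launch price per kilogram dominates the cost per kilowatt-hour.

```python
# Back-of-envelope: launch cost amortized over a solar array's lifetime output.
# Only LAUNCH_COST_PER_KG comes from the article; the rest are assumptions.

LAUNCH_COST_PER_KG = 500      # USD/kg, the Starship target Johnston cites
ARRAY_SPECIFIC_POWER = 150    # watts per kg of solar array (assumed)
MISSION_YEARS = 10            # satellite lifetime (assumed)
SUNLIGHT_FRACTION = 0.99      # near-continuous exposure in a sun-synchronous orbit

hours_in_sun = MISSION_YEARS * 8760 * SUNLIGHT_FRACTION
kwh_per_kg = (ARRAY_SPECIFIC_POWER / 1000) * hours_in_sun  # lifetime kWh per kg launched
launch_cost_per_kwh = LAUNCH_COST_PER_KG / kwh_per_kg

print(f"~${launch_cost_per_kwh:.3f}/kWh from launch costs alone")
```

Under these assumptions, launch alone contributes roughly $0.04 per kilowatt-hour, before hardware, radiators, or downlink, which is why the $0.05 target is tight even at Starship prices. Run the same arithmetic at roughly $1,500 per kilogram, closer to today's rideshare pricing, and launch alone is around $0.12 per kilowatt-hour, which is why the model collapses without Starship.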
"We're not going to be competitive on energy costs until Starship is flying frequently," Johnston told TechCrunch. That is an unusually honest statement from a founder who just hit unicorn status.
Who benefits and who loses
The winners right now are not orbital compute companies. They are Nvidia and SpaceX. Nvidia unveiled its Vera Rubin Space-1 chip modules at GTC two weeks ago, according to Tom's Hardware, promising 25 times the AI compute of an H100 for orbital workloads. Nvidia sells picks and shovels whether the gold rush is underground or in orbit. SpaceX controls the launch economics that make or break every space compute business plan. It also filed with the FCC for permission to build and operate a million satellites for its own distributed compute network.
Starcloud's Johnston acknowledges the tension. He told TechCrunch that SpaceX is "mainly planning on serving Grok and Tesla workloads" and positions Starcloud as an energy and infrastructure player rather than a direct competitor. Maybe. But any startup whose entire cost model depends on a competitor's rocket should make investors nervous.
The companies that lose, at least in narrative terms, are terrestrial data center developers stuck in permitting hell. SoftBank just took a $40 billion bridge loan to fund its AI infrastructure ambitions. France's Mistral secured $830 million in debt financing from seven banks to buy 13,800 Nvidia chips and build a data center near Paris, according to Reuters. These are real bets on the ground, and they face real constraints: grid capacity, water for cooling, zoning boards, NIMBY opposition. Orbital compute does not solve those problems today, but it reframes the conversation. When banks are lending hundreds of millions for chip purchases and founders are raising at billion-dollar valuations for satellite constellations, the market is telling you that energy access is now a first-order strategic concern.
What we don't know yet
- Whether space-based GPUs can maintain reliability over multi-year missions. Starcloud's own Nvidia A6000 failed during launch. A single satellite proving a concept is not a constellation running production workloads.
- How the economics actually pencil out. Starlink's 10,000 satellites generate roughly 200 megawatts of power, according to TechCrunch. Data centers drawing over 25 gigawatts are under construction in the U.S. alone, per Cushman and Wakefield. That is a gap of more than two orders of magnitude.
- Whether the synchronization problem is solvable. Large AI training runs require thousands of GPUs working in lockstep. Doing that across satellites in formation would require reliable laser links that do not yet exist at the needed scale. Most companies in this space expect to handle only simpler inference tasks for years before attempting training workloads.
What this looks like in five years
Starcloud is not alone. Aetherflux is reportedly raising at a $2 billion valuation. Google has Project Suncatcher. Aethero launched Nvidia's first space-based Jetson GPU in 2025. The field is forming.
But forming is not winning. The honest comparison is not "data centers in space" versus "data centers on the ground." It is "a few dozen GPUs in orbit" versus "nearly 4 million Nvidia GPUs sold to terrestrial hyperscalers in 2025 alone," as TechCrunch noted. The scale mismatch is five orders of magnitude.
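The two gaps in this piece are worth putting side by side, using only the figures already cited above. The one assumption is reading "a few dozen" GPUs as 50 for the arithmetic:

```python
import math

# Power gap: 25 GW of US data centers under construction vs the ~200 MW
# generated by Starlink's ~10,000 satellites (both figures cited above).
power_ratio = 25_000 / 200  # both sides in megawatts

# Compute gap: ~4 million terrestrial Nvidia GPUs sold in 2025 vs a few
# dozen in orbit ("a few dozen" taken as 50, an assumption for the math).
gpu_ratio = 4_000_000 / 50

print(f"power gap:   {power_ratio:,.0f}x")
print(f"compute gap: {gpu_ratio:,.0f}x "
      f"(~{round(math.log10(gpu_ratio))} orders of magnitude)")
```

The compute gap works out to about 80,000-fold, which rounds to the five orders of magnitude claimed above.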
Here is my prediction: orbital compute becomes a real, revenue-generating niche within three years, handling specific inference workloads for satellite operators and remote sensing companies. Starcloud's first customer, processing data for Capella Space's radar satellites, is already that business. But it does not meaningfully dent terrestrial data center demand this decade. The energy crisis on the ground gets solved the boring way: nuclear restarts, natural gas plants behind the meter, and enormous capital expenditure on grid interconnection.
Space compute is real infrastructure now. It is just very small infrastructure with a very long growth curve. The billion-dollar valuation is a bet on what launch economics look like in 2030, not what orbital compute delivers in 2026. If you are building, operating, or investing in AI infrastructure today, the right question is not whether this works. It is whether you can afford to wait and find out.
Jules Okonkwo covers technology for The Daily Vibe.



