Aided by Gemini and Grok AIs
Accelerating AI From Terrestrial to Orbital
Our November 2025 thesis focused on “The Autonomous Network” as a terrestrial play – Full Self Driving (FSD), Optimus, and the data centers powering them all. However, recent supernova developments suggest the “moat” is moving to Low Earth Orbit (LEO). The recent SpaceX-xAI merger, Musk’s open discussion of his strategic vision for the combined business, and Tesla’s stake in it suggest we’re on the threshold of an automation acceleration era.
- The SpaceX-xAI Merger: SpaceX’s acquisition of xAI on February 2, 2026 (valuing the combined entity at $1.25 Trillion) creates the world’s first vertically integrated “Orbital Brain.” By combining launch dominance with frontier artificial intelligence (AI), Musk is attempting to solve the two biggest bottlenecks for earthly autonomy: energy and heat.
- The 1 Million Satellite Filing: SpaceX’s Federal Communications Commission (FCC) filing for an “Orbital Data Center” system of up to 1 million satellites (powered by constant solar and cooled by the vacuum of space) signals a paradigm shift in the future cost of compute. The FCC has already posted this first-of-its-kind filing for public comment, with input due by March 6, 2026. The satellites would operate at altitudes between 500 and 2,000 kilometers in sun-synchronous inclinations to achieve near-constant (99%) sunlight for solar power generation. A key feature of the proposed system is reliance on inter-satellite optical links for communications among the satellites, which would then relay data to the ground. SpaceX Starship launch vehicles would bring large satellite payloads into orbit to build the data center constellation.
Musk’s dream is to make space the lowest-cost environment for AI inference, bypassing the Earth’s many power grid constraints. Musk has been imagining this space adventure for a while. Walter Isaacson’s biography “Elon Musk” recounts that Elon was just a 14-year-old dreamer when he drew crude diagrams for solar-powered energy satellites.
- Implications for Tesla: Tesla is no longer just a car company or even just an AI company; it is primarily an “edge device” network for Tesla AI robots on wheels and robots in human emulation form. The recent surprise $2B investment by Tesla into xAI (now a stake in the SpaceX-xAI combined entity) confirms that Tesla’s FSD and Optimus robots will be direct beneficiaries of this off-planet computing power.
Below is a summary of Elon’s bold vision from the 3-hour Cheeky Pint Podcast with John Collison and Dwarkesh Patel. The world’s foremost engineer and entrepreneur discusses the future of solar energy and data centers, terrestrial data center constraints, artificial intelligence, robotics, critical mineral refining, and US solar panel and semiconductor chip manufacturing.
Musk’s Central Claim
“In 36 months, but probably closer to 30 months, the most economically compelling place to put AI will be space.”
“You can mark my words.”
He predicts that within 2.5–3 years (~mid-2028 to early 2029), space will become the cheapest location for running large-scale AI compute — especially inference and eventually training as well. Note that the March 6, 2026, due date for public comment on SpaceX’s FCC filing indicates how rapidly this regulatory and commercial action may move.
Why This Matters So Much for AI
Musk’s reasoning is driven by physics and scaling realities rather than exotic technological breakthroughs:
1. Earth is fast approaching a hard energy wall
- AI compute demand is growing exponentially — chip performance, measured in floating-point operations per second (FLOPS), is doubling roughly every 6–12 months.
- Global non-China electricity generation is basically flat.
- Massive terrestrial solar growth would require enormous land, permits, energy storage/batteries required for nighttime and cloudy days, and transmission/grid infrastructure — all of which face years-long backlogs (interconnection queues, gas turbine lead times out to 2030, state and local permitting processes, and tariffs on imported goods).
- Six U.S. states have introduced bills to place a moratorium on data center construction.
- Engineering Conclusion: It will become physically and regulatorily impossible to build sufficient power generation fast enough for the maintenance and accelerated advance of the largest AI models on Earth.
2. Space removes the main constraints
- Solar power is roughly 5 times more productive per panel in space than on Earth (no atmosphere, no night, no adverse weather).
- No need for battery storage (constant sunlight).
- No heavy structural support (in orbit, structures are effectively weightless, so mass-bearing supports matter far less).
- Effectively unlimited clean energy once in orbit.
- Cooling is different (radiation only, no air), but Musk argues it’s manageable and still cheaper overall at extreme scale.
- Engineering Conclusion: The cost per watt in space becomes dramatically lower than anything possible on the third rock from the sun.
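The radiation-only cooling point above can be sized with a back-of-envelope calculation using the Stefan-Boltzmann law. The radiator temperature and emissivity below are illustrative assumptions, not SpaceX specifications, and the model ignores absorbed sunlight and the ~3 K cosmic background:

```python
# Back-of-envelope sketch of orbital cooling (radiation only),
# assuming an idealized one-sided flat radiator.

SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W/m^2/K^4
EMISSIVITY = 0.90     # assumed high-emissivity radiator coating
T_RADIATOR = 300.0    # assumed radiator temperature in kelvin (~27 C)

def radiator_area_m2(heat_watts: float) -> float:
    """Radiator area needed to reject heat_watts of waste heat,
    radiating from one side into empty space."""
    flux = EMISSIVITY * SIGMA * T_RADIATOR ** 4   # W radiated per m^2
    return heat_watts / flux

# Under these assumptions, a 1 MW compute payload needs ~2,400 m^2
# of radiator surface:
print(f"~{radiator_area_m2(1_000_000):,.0f} m^2 per MW")
```

The takeaway is that radiative cooling is workable but drives large, lightweight radiator structures — which is exactly where the microgravity advantage (no weight-bearing supports) helps.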
3. The economic tipping point
- Musk believes launch costs (via SpaceX Starships) will fall far enough that launching GPUs and solar panels will become cheaper than building equivalent terrestrial power infrastructure.
- SpaceX is iteratively improving Starship’s rocket reusability. Musk forecasts that SpaceX will achieve full and rapid reusability of the Starship system in 2026, enabling both the Super Heavy booster and the ship to be caught and rapidly reused.
- Energy is only about 15% of a data center’s lifetime cost today versus semiconductor chips representing 70% of the cost, but when energy becomes the binding constraint, the equation flips.
- Engineering Conclusion: Once energy is a binding constraint, orbital compute wins the pure economic battle.
Musk’s Scaling Vision
- Elon is not optimizing his scaling function for one variable. He’s optimizing across energy, compute, manufacturing, launch capacity, intelligence, and robotics — all simultaneously, with each factor’s unlock enabling the next in a recursive manner, creating an interstellar flywheel.
- Within about 5 years (early 2030s), Musk projects that SpaceX will be launching more AI compute per year than the cumulative total of all AI compute ever built on Earth up to that point.
- Target: Hundreds of gigawatts (1 GW = 1,000 megawatts (MW); 1 MW satisfies the power needs of 164 U.S. homes), eventually terawatts (1 TW = 1,000 GW) per year in orbit.
- Example: 100 GW of orbital compute would require roughly 10,000 Starship launches per year or roughly 1 launch every ~50–60 minutes if spread evenly.
- Orbital Scaling Advantage: Musk says SpaceX is preparing for 10,000–30,000 launches per year, which would make them a “hyper-hyper-scaler” for AI.
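The launch-cadence arithmetic in the bullets above can be sanity-checked directly. The only assumption here is that launches are spread perfectly evenly over a 365-day year:

```python
# Sanity-checking the launch-cadence figures quoted above.

MINUTES_PER_YEAR = 365 * 24 * 60   # 525,600 minutes

def minutes_between_launches(launches_per_year: int) -> float:
    """Average gap between launches, assuming an even cadence."""
    return MINUTES_PER_YEAR / launches_per_year

# 10,000 launches/year -> one roughly every 53 minutes, consistent
# with the ~50-60 minute figure cited for 100 GW of orbital compute.
print(f"{minutes_between_launches(10_000):.1f} min")   # 52.6 min

# At the upper end of the stated 30,000/year preparation:
print(f"{minutes_between_launches(30_000):.1f} min")   # 17.5 min
```

At the upper bound, that is a Starship launch roughly every 18 minutes around the clock — which is why Musk frames full, rapid reusability as the gating milestone.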
Implications for AI Applications
- Unlimited scaling of frontier models — training and inference no longer gated by terrestrial power, politics, or grid limits.
- xAI (and potentially Tesla) gains a decisive advantage: while terrestrial competitors fight over interconnection queues and permits, xAI and its partners are unconstrained.
- Gigantic scale enables truly massive real-time AI uses (e.g. global inference, orbital coordination of Optimus and Cybercab fleets, digital human emulation at planetary scale).
- Musk frames this as existential for civilization-scale AI progression. Earth-based solutions will hit a ceiling, while space offers infinite scale.
Challenges Raised in the Interview
Dwarkesh pushed back hard:
- Photovoltaic panels and semiconductor chips still dominate cost. SpaceX will need to acquire sufficient supplies of both before being able to launch them into space.
- Servicing failed GPUs mid-training in orbit sounds very difficult.
- Power is only part of the equation.
Musk’s responses:
The energy bottleneck is so severe that it overrides those concerns once you hit the energy wall. If Earth can’t supply the power, the other costs become secondary. Both SpaceX and Tesla are presently working on manufacturing their own photovoltaic panels. On the compute side, Tesla is actively designing its own chips in partnership with Samsung Electronics. GPUs typically fail early in their service life (“infant mortality”), and servers will be rigorously tested on Earth prior to launch.
Early Execution Status as of Feb 7, 2026
- Both Tesla and SpaceX have been tasked with ramping US-based manufacturing of solar panels for space. The design and manufacture of these panels is easier than terrestrial solar in some respects. For example, you don’t have to design orbital panels to withstand severe weather.
- Hiring is already underway for engineers to work on AI satellite and solar panels.
- Tesla is actively working with Samsung Electronics on its own AI5 chip design which is scheduled for mass production in 2027.
- This interview is the most in-depth public explanation of Musk’s warp speed engineering tactics.
Bottom Line for Servant Financial Clients:
Our “Forge Ahead” strategic sleeve allocation remains well positioned for this acceleration in autonomy. The supply chains for these technologies will demand unprecedented volumes of raw materials in this second race to space – steel, aluminum, lithium, nickel, cobalt, graphite, copper, silver, rare earths, silica sand, and fluorine etching solutions for chips and solar.
As a complement to the Forge Ahead sleeve, we’ve identified an Exchange Traded Fund (ETF) – Baron First Principal ETF (RONB) – that is levered to Musk’s Ad Astra (“to the stars”) AI-driven autonomous future. The ETF has pre-merger exposure to both SpaceX and xAI private securities and Tesla. We’re treating this as a higher risk, opportunistic strategy and adding it more selectively to more risk-tolerant models – Core-Satellite Moderate and Core-Satellite Aggressive and similar bespoke client models.
A famous astronaut’s quote seems an apropos way to close this article on Musk’s “to the stars” strategic play.
“That’s one small step for a man, one giant leap for mankind.”
~ Astronaut Neil Armstrong
