NEUNST
Earth has an atmosphere. Space is a heat sink. We are building the gigawatt swarm that terrestrial grids cannot sustain.
Terrestrial data centers are fighting a losing war against three constraints: local power grids that cannot scale to gigawatt loads, water availability as cooling towers drain reservoirs, and the heat capacity of air, where convection is fundamentally inefficient. The solution is not better fans. The solution is to delete the atmosphere. At $200/kg to orbit with Starship V3, the economics of compute invert completely. We stop optimizing for mass. We start optimizing for surface area. This is not an incremental improvement; it is a phase transition in how humanity deploys intelligence infrastructure.
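To make "optimize for surface area" concrete: in vacuum, waste heat leaves only by radiation, so panel area, not mass, sets the power ceiling. A back-of-the-envelope sketch follows; the 300 K panel temperature, 0.9 emissivity, two-sided emission, and neglected solar loading are illustrative assumptions rather than flight numbers, and the 4 kW blade and 4 MW ribbon figures come from the sections below.

```python
# Back-of-the-envelope radiator sizing. Assumptions (illustrative, not flight
# numbers): two-sided panel, emissivity 0.9, 300 K surface, solar load ignored.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiator_area_m2(heat_w: float, t_k: float = 300.0,
                     emissivity: float = 0.9, sides: int = 2) -> float:
    """Panel area needed to reject heat_w purely by thermal radiation."""
    flux = sides * emissivity * SIGMA * t_k**4  # W rejected per m^2 of panel
    return heat_w / flux

print(f"{radiator_area_m2(16 * 250):.1f} m^2 per 4 kW blade")      # ~4.8 m^2
print(f"{radiator_area_m2(4e6):,.0f} m^2 for a 4 MW ribbon")       # ~4,800 m^2
```

Under these assumptions a gigawatt-class swarm needs on the order of a square kilometer of radiator, which is why the hardware described below is a ribbon, not a rack.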
The Chinchilla scaling law demonstrates that optimal model performance follows a power-law relationship between compute and loss. The empirical relationship is given by:

L(N, D) = E + A / N^α + B / D^β

where:

- N is the number of model parameters
- D is the number of training tokens
- E is the irreducible loss of the data distribution
- A, B, α, β are fitted constants (the published fit gives α ≈ 0.34, β ≈ 0.28)
This means every 10x increase in compute yields predictable capability gains. Our orbital infrastructure targets 10²⁴ FLOPs of sustained training compute, enough to train frontier models entirely in space.
[Figure: cross-entropy loss vs. training FLOPs (log scale)]
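For concreteness, a minimal sketch of the parametric form above, using the constants fitted in Hoffmann et al. (2022) and the standard C ≈ 6·N·D budget approximation; the constants come from that paper, not from our fleet.

```python
# Chinchilla parametric loss with the constants fitted in Hoffmann et al. (2022).
A, B, E = 406.4, 410.7, 1.69
ALPHA, BETA = 0.34, 0.28

def chinchilla_loss(n_params: float, n_tokens: float) -> float:
    """Predicted cross-entropy loss for n_params trained on n_tokens."""
    return E + A / n_params**ALPHA + B / n_tokens**BETA

# Compute-optimal training splits a budget C ~ 6*N*D roughly evenly in log
# space, so N and D each grow ~sqrt(10) = 3.2x for every 10x of compute.
for c in (1e22, 1e23, 1e24):
    n = (c / 6) ** 0.5          # parameters
    d = c / (6 * n)             # tokens
    print(f"C={c:.0e}  N={n:.1e}  D={d:.1e}  loss={chinchilla_loss(n, d):.3f}")
```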
Standard racks melt in a vacuum, so we stripped the data center down to the silicon. Each Space Blade carries 16 custom inference accelerators, de-shrouded and undervolted to 250W per chip at the efficiency sweet spot; cooling is conductive bonding to ammonia-filled structural vapor chambers. HBM is scarce, so we use commodity NAND: on-die SRAM and HBM hold the active kernels, backed by 200TB of NVMe SSDs bonded directly to the PCIe bus, and hierarchical pre-fetching streams Mixture-of-Experts weights from cold flash to hot silicon in microseconds.

The interconnect we call Rainbow Spine. Lasers are fragile and copper is heavy, so we run hollow-core fiber optics the length of the ribbon with passive DWDM: each node is assigned its own wavelength, and passive glass prisms drop its data to the chip with zero electrical switching overhead.
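As a sketch of how the hot/cold hierarchy behaves, here is a toy HBM-resident LRU of expert weights, filled asynchronously from flash one layer ahead of the router. ExpertCache and load_expert_from_nvme are hypothetical stand-ins, not the flight software.

```python
# Toy hierarchical pre-fetcher: an LRU over HBM slots backed by NVMe flash.
from collections import OrderedDict
from concurrent.futures import ThreadPoolExecutor

def load_expert_from_nvme(expert_id: int) -> bytes:
    return b"\x00" * 1024  # stand-in for a DMA read from the NVMe tier

class ExpertCache:
    def __init__(self, hbm_slots: int):
        self.capacity = hbm_slots
        self.hot = OrderedDict()   # expert_id -> weights resident in HBM
        self.inflight = {}         # expert_id -> Future for an NVMe read
        self.io = ThreadPoolExecutor(max_workers=8)

    def prefetch(self, expert_ids):
        """Call as soon as layer k's router predicts layer k+1's experts."""
        for eid in expert_ids:
            if eid not in self.hot and eid not in self.inflight:
                self.inflight[eid] = self.io.submit(load_expert_from_nvme, eid)

    def get(self, expert_id: int) -> bytes:
        if expert_id in self.inflight:          # prefetch landed (or wait for it)
            self.hot[expert_id] = self.inflight.pop(expert_id).result()
        elif expert_id not in self.hot:         # cold miss: blocking flash read
            self.hot[expert_id] = load_expert_from_nvme(expert_id)
        self.hot.move_to_end(expert_id)         # mark most-recently-used
        while len(self.hot) > self.capacity:    # evict the coldest expert
            self.hot.popitem(last=False)
        return self.hot[expert_id]
```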
We export intelligence, not data. The bandwidth bottleneck of space rules out training, which demands petabyte-scale transfers, but it is perfect for reasoning, which demands only tokens. The loop is simple: uplink the user prompt at 1KB, execute 10,000 GPU-hours of chain-of-thought reasoning, downlink the answer at 10KB. We do not downlink the memory state; we downlink the intelligence. Using commercial optical terminals, we plug directly into the Starlink backhaul. We are just another node on the mesh.
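To put numbers on that asymmetry, a quick sketch assuming a single 100 Gbps optical terminal and a roughly 1 PB pretraining corpus (both illustrative figures; the 1 KB and 10 KB request sizes are from the loop above):

```python
# Link-budget asymmetry: token export vs. dataset import. The terminal rate
# and corpus size are illustrative assumptions, not published specs.
LINK_BYTES_PER_S = 100e9 / 8            # one 100 Gbps optical terminal
PROMPT_B, ANSWER_B = 1_000, 10_000      # per-request traffic from the text
CORPUS_B = 1e15                         # ~1 PB pretraining corpus (assumed)

requests_per_s = LINK_BYTES_PER_S / (PROMPT_B + ANSWER_B)
corpus_hours = CORPUS_B / LINK_BYTES_PER_S / 3600

print(f"Inference: ~{requests_per_s:,.0f} requests/s saturate the link")
print(f"Training:  ~{corpus_hours:.0f} hours to uplink the corpus once")
```

At these figures roughly a million requests per second fit through one terminal, while a single pass of the corpus would monopolize it for nearly a day, before any gradient synchronization.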
Phase 0 is underway now: a single ESPA-class node at 150kg, launching on a Falcon 9 Transporter rideshare to validate commercial NAND flash survival rates in the radiation environment of sun-synchronous orbit and the physics of passive ammonia cooling. Phase 1, on Starship Block 1: a 10-slab ribbon at 250kW to prove deployment mechanics and verify the zig-zag unspooling sequence. Phase 2, on Starship V3: the full 170-slab ribbon at 4MW, delivering commercial-scale inference at $0.01 per million tokens.
We are hiring engineers, not astronauts. We do not care about your passion for space exploration. We care about your ability to maximize FLOPS per watt.