Elon Musk's AI startup xAI is undertaking a sweeping expansion of its AI infrastructure and operations. The company plans to spend $4.7 billion over the next three months, roughly $1.6 billion per month, drawn from the $9.3 billion it recently raised. By 2027, xAI aims to invest $18 billion in capital expenditures for new data centers. The startup has fully prepaid for Nvidia's Blackwell GPUs and has allocated $2 billion for upgrades to its Colossus supercomputer, including liquid cooling and water circulation systems.
Since its Series C funding round, xAI's spending on employee salaries has nearly quadrupled. The company is preparing to bring an additional 110,000 Nvidia GB200 GPUs online at its Memphis, Tennessee facility, which would raise its total AI-training GPU count to 340,000 and make Colossus the largest AI supercomputer in the world. The current inventory comprises 150,000 H100s, 50,000 H200s, and 30,000 GB200s. Musk has highlighted the challenges of building this massive AI supercluster: rapidly retrofitting the Memphis factory, sourcing 100,000 H100 GPUs, and installing advanced power and cooling systems. These efforts support xAI's work on training advanced AI models and humanoid robots.

Musk also recently discussed digital superintelligence, multiplanetary life, and practical applications of AI during a 50-minute interview at Y Combinator's AI Startup School.