Ice Lounge Media

Eric Schmidt: Why America needs an Apollo program for the age of AI

The global race for computational power is well underway, fueled by a worldwide boom in artificial intelligence. OpenAI’s Sam Altman is seeking to raise as much as $7 trillion for a chipmaking venture. Tech giants like Microsoft and Amazon are building AI chips of their own. The need for more computing horsepower to train and use AI models, driving a quest for everything from cutting-edge chips to giant data sets, isn’t just a current source of geopolitical leverage (as with US curbs on chip exports to China). It is also shaping how nations will grow and compete in the future, with governments from India to the UK developing national strategies and stockpiling Nvidia graphics processing units.

I believe it’s high time for America to have its own national compute strategy: an Apollo program for the age of AI.

In January, under President Biden’s executive order on AI, the National Science Foundation launched a pilot program for the National AI Research Resource (NAIRR), envisioned as a “shared research infrastructure” to provide AI computing power, access to open government and nongovernment data sets, and training resources to students and AI researchers. 

The NAIRR pilot, while incredibly important, is just an initial step. The NAIRR Task Force’s final report, published last year, outlined an eventual $2.6 billion budget required to operate the NAIRR over six years. That’s far from enough—and even then, it remains to be seen if Congress will authorize the NAIRR beyond the pilot.

Meanwhile, much more needs to be done to expand the government’s access to computing power and to deploy AI in the nation’s service. Advanced computing is now core to the security and prosperity of our nation; we need it to optimize national intelligence, pursue scientific breakthroughs like fusion reactions, accelerate advanced materials discovery, ensure the cybersecurity of our financial markets and critical infrastructure, and more. The federal government played a pivotal role in enabling the last century’s major technological breakthroughs by providing the core research infrastructure, like particle accelerators for high-energy physics in the 1960s and supercomputing centers in the 1980s. 

Now, with other nations around the world devoting sustained, ambitious government investment to high-performance AI computing, we can’t risk falling behind. It’s a race to power the most world-altering technology in human history. 

First, more dedicated government AI supercomputers need to be built for an array of missions ranging from classified intelligence processing to advanced biological computing. In the modern era, computing capabilities and technical progress have proceeded in lockstep. 

Over the past decade, the US has successfully pushed classic scientific computing into the exascale era with the Frontier, Aurora, and soon-to-arrive El Capitan machines—massive computers that can perform over a quintillion (a billion billion) operations per second. Over the next decade, the power of AI models is projected to increase by a factor of 1,000 to 10,000, and leading compute architectures may be capable of training a 500-trillion-parameter AI model in a week (for comparison, GPT-3 has 175 billion parameters). Supporting research at this scale will require more powerful and dedicated AI research infrastructure, significantly better algorithms, and more investment. 
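As a rough sanity check on these scales, the arithmetic can be sketched in a few lines. All figures come from the paragraph above; the 500-trillion-parameter model is a projection, not an existing system:

```python
# Back-of-envelope arithmetic for the compute scales discussed above.
# Figures are illustrative, taken directly from the surrounding text.

EXAFLOP = 1e18  # exascale: over a quintillion (a billion billion) ops/sec

gpt3_params = 175e9       # GPT-3: 175 billion parameters
projected_params = 500e12  # projected 500-trillion-parameter model

# How much larger the projected model would be than GPT-3
param_ratio = projected_params / gpt3_params
print(f"Parameter ratio vs. GPT-3: {param_ratio:,.0f}x")  # ~2,857x

# Projected growth in the power of AI models over the next decade
low_growth, high_growth = 1_000, 10_000
print(f"Projected capability growth: {low_growth:,}x to {high_growth:,}x")
```

The parameter ratio alone (nearly 3,000x) illustrates why supporting research at this scale requires dedicated infrastructure rather than incremental upgrades.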

Although the US currently still has the lead in advanced computing, other countries are nearing parity and set on overtaking us. China, for example, aims to boost its aggregate computing power more than 50% by 2025, and it has been reported that the country plans to have 10 exascale systems by 2025. We cannot risk acting slowly. 

Second, while some may argue for using existing commercial cloud platforms instead of building a high-performance federal computing infrastructure, I believe a hybrid model is necessary. Studies have shown significant long-term cost savings from using federal computing instead of commercial cloud services. In the near term, scaling up cloud computing offers quick, streamlined base-level access for projects—that’s the approach the NAIRR pilot is embracing, with contributions from both industry and federal agencies. In the long run, however, procuring and operating powerful government-owned AI supercomputers with a dedicated mission of supporting US public-sector needs will set the stage for a time when AI is much more ubiquitous and central to our national security and prosperity. 

Such an expanded federal infrastructure can also benefit the public. The life cycle of the government’s computing clusters has traditionally been about seven years, after which new systems are built and old ones decommissioned. Inevitably, as newer cutting-edge GPUs emerge, hardware refreshes will phase out older supercomputers and chips, which can then be recycled for lower-intensity research and nonprofit use—thus adding cost-effective computing resources for civilian purposes. While universities and the private sector have driven most AI progress thus far, a fully distributed model will increasingly face computing constraints as demand soars. In a survey by MIT and the nonprofit US Council on Competitiveness of some of the biggest computing users in the country, 84% of respondents said they faced computation bottlenecks in running key programs. America will need big investments from the federal government to stay ahead.

Third, any national compute strategy must go hand in hand with a talent strategy. The government can better compete with the private sector for AI talent by offering workers an opportunity to tackle national security challenges using world-class computational infrastructure. To ensure that the nation has available a large and sophisticated workforce for these highly technical, specialized roles in developing and implementing AI, America must also recruit and retain the best global students. Crucial to this effort will be creating clear immigration pathways—for example, exempting PhD holders in relevant technical fields from the current H-1B visa cap. We’ll need the brightest minds to fundamentally reimagine how computation takes place and spearhead novel paradigms that can shape AI for the public good, push forward the technology’s boundaries, and deliver its gains to all.

America has long benefited from its position as the global driver of innovation in advanced computing. Just as the Apollo program galvanized our country to win the space race, setting national ambitions for compute will not only bolster our AI competitiveness in the decades ahead but also, by broadening access, drive R&D breakthroughs across practically all sectors. Advanced computing architecture can’t be erected overnight. Let’s start laying the groundwork now.

Eric Schmidt was the CEO of Google from 2001 to 2011. In 2024, he and Wendy Schmidt co-founded Schmidt Sciences, a philanthropic venture that funds unconventional areas of exploration in science and technology.