A cluster of 300 Nvidia H100 GPUs will boost the University’s robust computing infrastructure, accelerate exploration of generative AI, and help keep AI research in the public sphere.
A new cluster of 300 Nvidia H100 GPUs at Princeton is poised to accelerate AI research at scale and build on the University’s strengths across academic disciplines. The cluster is one of the largest of its kind for university research.
Among other applications, the new cluster will facilitate larger projects central to the mission of the Princeton Language and Intelligence (PLI) initiative, which enables research at scale into large language models (LLMs) and other aspects of generative AI.
The Princeton cluster arrives at a crucial time in AI research, when industry’s massive computing resources have mostly driven the direction of AI discourse. The multimillion-dollar investment was primarily funded by the University endowment.
Princeton’s investment will help keep AI research in the public sphere, said Sanjeev Arora, the Charles C. Fitzmorris Professor in Computer Science at Princeton and director of PLI. “If you don’t have the compute, you cannot do things at scale and you can’t join the conversation,” Arora said.
The University’s existing Della cluster consists of A100 GPU units. The next-generation H100s are a muscular addition. “One of the things we have learned over the last few years for generative AI is that scale really matters quite a bit,” said Karthik Narasimhan, assistant professor of computer science and associate director of PLI.
Narasimhan said the new cluster is exciting because it enables research on at least medium-scale models. Without such heavy-duty resources, academic researchers would be “restricted to very small models and trying to come up with techniques which may or may not scale up,” he said, and research universities could be shut out of advances in generative AI.
He said the robust cluster not only supports bigger models but also allows “more room for experimentation. You can try multiple things in parallel.”
An Interdisciplinary Focus
The new cluster will facilitate more large-scale team projects, Arora said. The goal is to develop models, datasets and methods to specifically adapt AI for academic users.
PLI has awarded seed grants for 2024 to a roster of 14 projects using large AI models, enabling scholars to weave AI into their research across disciplines and as part of interdisciplinary teams. The researchers include faculty from computer science, neuroscience, politics, economics, English, history, Near Eastern studies, sociology, psychology, electrical and computer engineering, operations research and financial engineering, and more.
Danqi Chen, assistant professor of computer science and associate director of PLI, said the new Nvidia cluster will also support fundamental AI research, including the fine-tuning of existing models.
Chen pointed to the Princeton-grown “Language Models as Science Tutors” project as an example of fine-tuning existing models for targeted applications. The project also showcases how researchers with different domain expertise can collaborate, she said.
Chen said the new cluster enables scholars within academia to chart their own course in AI, instead of simply following the path paved by industry.