Harvard’s Kempner Institute Expands Academic Computing Cluster, Adds Nearly 400 GPUs

The Kempner Institute for the Study of Natural and Artificial Intelligence was founded through a $500 million donation by Mark Zuckerberg and Priscilla Chan '07. By Addison Y. Liu
By Camilla J. Martinez and Tiffani A. Mezitis, Crimson Staff Writers

Harvard’s Kempner Institute for the Study of Natural and Artificial Intelligence purchased nearly 400 advanced graphics processing units last month to bolster its computational cluster, particularly for training generative AI models.

GPUs are specialized computing units that combine processing cores, memory, and networking capabilities. With the addition of the units — NVIDIA’s H100-80 GB GPUs — the Kempner Institute’s cluster has become one of the world’s largest academic computing clusters. Computing clusters are composed of sets of computers that work together to perform computationally intensive tasks more efficiently.
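
For illustration, the sketch below shows how a researcher might check which GPUs a cluster job can see. It assumes PyTorch, a machine learning framework commonly used on academic clusters, and is a generic example rather than the Kempner Institute's actual setup.

```python
# Minimal sketch: inspect the GPUs visible to a cluster job using PyTorch.
# Illustrative only; not the Kempner Institute's configuration.
import torch

if torch.cuda.is_available():
    count = torch.cuda.device_count()
    print(f"{count} GPU(s) visible to this job")
    for i in range(count):
        props = torch.cuda.get_device_properties(i)
        # total_memory is reported in bytes; an H100-80 GB card shows roughly 80 GB here.
        print(f"  GPU {i}: {props.name}, {props.total_memory / 1e9:.0f} GB")
else:
    print("No GPUs visible; running on CPU")
```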

The Kempner Institute was launched in December 2021 with a mission to “understand the basis of intelligence in natural and artificial systems.” The institute established its initial computing cluster with 144 A100-40 GB GPUs in spring 2022.

The cluster encompasses a diverse array of hardware and software technologies, specializing in advanced machine learning and networking capabilities. The cluster enables large-scale experimentation and research in natural and artificial intelligence.

“We have a brilliant community of students and scientists pursuing ambitious machine learning projects and asking big and important questions, but some of these projects are only possible if we have access to the level of technology available in industry,” Elise Porter, executive director of the Kempner Institute, wrote in an emailed statement.

The H100 model offers roughly three times the training and inference speed of the earlier A100 model and is networked using InfiniBand, a high-speed interconnect standard used in high-performance computing. The new cluster supports transfer speeds of 1,600 gigabytes per second, allowing language models to be trained more quickly and efficiently.
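
As a rough illustration of why interconnect bandwidth matters, the sketch below shows a typical multi-GPU training setup using PyTorch's DistributedDataParallel, in which gradients are synchronized across GPUs over the cluster's network (commonly NCCL over InfiniBand); the faster that link, the less time the GPUs spend waiting on one another. This is a generic, hedged example and not code from the Kempner cluster.

```python
# Minimal sketch of data-parallel training across networked GPUs.
# Generic illustration only; not the Kempner Institute's training code.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each worker process.
    dist.init_process_group(backend="nccl")  # NCCL typically runs over InfiniBand
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(4096, 4096).cuda(local_rank)  # stand-in for a real model
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(10):
        batch = torch.randn(32, 4096, device=local_rank)
        loss = model(batch).pow(2).mean()
        loss.backward()        # DDP all-reduces gradients across GPUs here
        optimizer.step()
        optimizer.zero_grad()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()  # launch with: torchrun --nproc_per_node=<gpus> this_script.py
```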

The expansion is designed specifically for training generative AI models, with the increase in computational power enabling faster development and testing of new models.

“Generative AI is arguably one of the most important technological innovations of our generation, and these models are built on large clusters of highly networked super-fast processors,” Porter wrote.

“Building an academic cluster at scale, with GPUs this powerful, makes it possible for scientists at the Kempner to pursue cutting-edge machine learning research here at Harvard,” she added.

The processing units are housed in the Massachusetts Green High Performance Computing Center, an intercollegiate computing facility in western Massachusetts.

—Staff writer Camilla J. Martinez can be reached at camilla.martinez@thecrimson.com. Follow her on X @camillajinm.

—Staff writer Tiffani A. Mezitis can be reached at tiffani.mezitis@thecrimson.com.


Tags: Technology, Front Middle Feature, Featured Articles, Artificial Intelligence