One of the world’s fastest supercomputers, designed specifically for artificial intelligence workloads, came online today at the National Energy Research Scientific Computing Center, or NERSC, located at Lawrence Berkeley National Laboratory in California.
The Perlmutter system is, according to Nvidia Corp., which supplied many of its graphics chips, “the world’s fastest” at the 16- and 32-bit mixed-precision math used by AI applications. It will be tasked with some of the toughest scientific challenges in astrophysics and climate science, such as creating a 3D map of the universe and investigating atomic interactions for greener energy sources, Nvidia said.
The system is a Cray supercomputer built by Hewlett Packard Enterprise Co. with serious processing power. It’s powered by 6,159 Nvidia A100 Tensor Core graphics processing units, the most advanced GPU Nvidia has ever built.
Nvidia says that makes Perlmutter the world’s largest A100 GPU-powered system, capable of delivering nearly four exaflops, or quintillions of calculations per second, of AI performance. “We live in a different age in AI,” Dion Harris, Nvidia’s senior product marketing manager focused on accelerated computing for HPC and AI, said in a press release.
In a blog post, Harris said researchers will use Perlmutter to assemble what would be the largest 3D map of the universe yet, by processing data from the Dark Energy Spectroscopic Instrument, or DESI as it’s known, which can capture up to 5,000 galaxies in a single exposure.
The idea is that by creating a 3D map of the universe, scientists will be able to learn more about “dark energy,” a mysterious force thought to be accelerating the expansion of the universe. The supercomputer is appropriately named after astrophysicist Saul Perlmutter, who won the Nobel Prize in 2011 for work leading to the discovery of dark energy.
Harris said Perlmutter’s enormous processing power will be used to analyze dozens of exposures from DESI each night, helping astronomers decide where to point the instrument the following night.
“Preparing a year’s worth of data for publication would have taken weeks or months on prior systems, but Perlmutter should help them get the job done in a few days,” Harris said.
Processing atomic interactions
That’s not the only work Perlmutter will be put to use on. NERSC plans for the new supercomputer to serve more than 7,000 researchers around the world working on projects in many other scientific fields. One of the main areas of interest is materials science, where researchers want to discover and understand how atomic interactions can help create better batteries and new biofuels.
Simulating atomic interactions is an incredibly difficult computational challenge, even for conventional supercomputers, Harris said.
“Traditional supercomputers can barely handle the math required to generate simulations of a few atoms over a few nanoseconds with programs such as Quantum Espresso,” he explained. “But by combining their highly accurate simulations with machine learning, scientists can study more atoms over longer stretches of time.”
Perlmutter’s A100 Tensor Cores are ideally suited to help with this, Harris said, since they can accelerate both the double-precision floating-point math used in simulations and the mixed-precision calculations required for deep learning.
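To see why the distinction between precisions matters, here is a minimal, illustrative sketch in plain NumPy (not NERSC or Nvidia code): accumulating many small values entirely in 16-bit half precision loses accuracy quickly, while accumulating in 64-bit double precision preserves it. Mixed-precision hardware exploits a similar trade-off, performing fast low-precision multiplies while accumulating results at higher precision.

```python
import numpy as np

# 10,000 copies of a small value stored in half precision (float16).
# Note: 0.001 is not exactly representable in float16, so the true
# sum is approximately 10.004 rather than exactly 10.0.
values = np.full(10_000, 0.001, dtype=np.float16)

# Naive approach: accumulate entirely in float16. Once the running
# total grows large enough, each tiny addend falls below half the
# spacing between representable float16 numbers and the sum stalls,
# ending well below the true value.
naive = np.float16(0.0)
for v in values:
    naive = np.float16(naive + v)

# Higher-precision accumulation: upcast to float64 before summing,
# which keeps every contribution.
accurate = values.astype(np.float64).sum()

print(float(naive))     # stalls far short of the true sum
print(float(accurate))  # close to 10.0
```

The same principle is why deep-learning frameworks typically pair low-precision arithmetic with higher-precision accumulators: the speed of the small format, without its accumulation error.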
Wahid Bhimji, who serves as head of data and analytics services at NERSC, said AI for science is an area of tremendous growth, with proofs of concept rapidly advancing toward production use cases in fields such as particle physics and bioenergy.
“People are exploring larger and larger neural network models and demand more efficient access to resources,” Bhimji added, “so Perlmutter, with its A100 GPUs, all-flash file system and streaming data capabilities, is well timed to meet this demand for AI.”
NERSC said researchers who believe they have a compelling challenge that Perlmutter could help crack can submit a request for access to the supercomputer.
Perlmutter is available now and will gain more computing power in the near future, with a “second phase” scheduled to add more processors later this year.
Nvidia has been instrumental in building many of the world’s most powerful supercomputers. Other systems using Nvidia’s A100 GPUs include the new Hawk system at the High-Performance Computing Center Stuttgart in Germany; the Selene supercomputer; a system at Argonne National Laboratory used to research methods of combating COVID-19; and another AI-focused supercomputer, called Leonardo, installed at the Italian inter-university research center CINECA.
With reporting from Robert Hof.
Image: Nvidia and DESI