Supercomputers take classical computing to the next level. At their core, supercomputers are classical computers, but they are much bigger and faster. Instead of a handful of processors, supercomputers use hundreds of thousands of processors working together on a single problem.
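To see that idea in miniature, here is a toy Python sketch, not code from any actual supercomputer, that splits one big job, summing ten million integers, across a few worker processes and combines the partial results. A supercomputer applies the same divide-and-combine principle across hundreds of thousands of processors.

```python
# A toy illustration of the parallelism behind supercomputing:
# split one big job across several workers, then combine the results.
# (A laptop has a few cores; a supercomputer applies the same idea
# across hundreds of thousands of processors.)
from multiprocessing import Pool

def partial_sum(bounds):
    """Sum the integers in [start, stop) -- one worker's share of the job."""
    start, stop = bounds
    return sum(range(start, stop))

if __name__ == "__main__":
    n, workers = 10_000_000, 4
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    with Pool(workers) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total)  # same answer as sum(range(n)), computed in parallel
```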
But supercomputers aren’t the only way to scale up classical computing. Friedman says that many tasks once reserved for supercomputers are now handled by data centers, which are warehouses of connected servers. While supercomputers are designed to solve one massive problem at incredible speed, data centers such as Google’s or Amazon’s cloud server farms are built to handle many independent tasks at once, such as hosting websites, storing files in the cloud, and running AI tools. Their power comes from sheer scale rather than from extreme speed on a single problem.
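A second sketch, again purely illustrative with a hypothetical handle_request stand-in, captures the data-center pattern by contrast: many small, independent jobs handled concurrently, with no single problem being decomposed.

```python
# The data-center pattern: many small, independent jobs served at once,
# rather than one huge job split into pieces. (handle_request is a
# hypothetical stand-in for serving a web page or a storage lookup.)
from concurrent.futures import ThreadPoolExecutor
import time

def handle_request(request_id):
    """Simulate one independent task, such as answering a web request."""
    time.sleep(0.1)  # simulated I/O work
    return f"request {request_id} served"

with ThreadPoolExecutor(max_workers=8) as pool:
    for result in pool.map(handle_request, range(16)):
        print(result)
```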
While these systems may be built for different goals, classical computers, supercomputers, and data centers all share the same basic components and principles.
“The technology is optimized differently, the architectures are optimized differently, and the software is optimized differently, but, at the same time, it’s still transistors, they still compute in memory, it’s still binary, and the programming languages are ones you’ll recognize,” Friedman says.
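A tiny illustration of that shared foundation: any recognizable programming language can expose the binary digits underneath. The Python snippet below, offered only as an example, prints a few integers alongside the bit patterns a classical machine, whether laptop or Conesus, ultimately stores.

```python
# Every classical computer, from a phone to a supercomputer, ultimately
# represents data in binary. Python can show that representation directly:
for number in (5, 42, 255):
    print(f"{number:>3} -> {number:08b}")
# Output:
#   5 -> 00000101
#  42 -> 00101010
# 255 -> 11111111
```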
POWER UP: In addition to being one of the world’s most powerful computer systems, URochester’s Conesus supercomputer is among the most energy efficient, as ranked by the TOP500 list. (University of Rochester photo / J. Adam Fenster)
Strengths: Supercomputers excel at computationally intensive problems that would take ordinary computers far too long to solve. They can predict weather, model pandemics, design new structures, and simulate complex systems. At Rochester’s Laboratory for Laser Energetics, for example, researchers use the supercomputer Conesus to model physics phenomena at extreme temperatures and densities like those at the center of stars. That work includes simulating fusion to advance clean energy and national security applications, analyzing experimental data, and planning more effective experiments.
“The supercomputer Conesus has dramatically enhanced LLE’s computational capabilities, allowing for an understanding of very complex physics phenomena in three dimensions with unprecedented details,” says Valeri Goncharov, LLE’s theory division director and an assistant professor of research in the Department of Mechanical Engineering. “A significant increase in computational speed also opens up opportunities to use simulation results to train AI models, enabling transformative advances in areas like fusion research.”
Limitations: Despite their massive power, and even with advances in classical computer circuits, architecture, and efficiency, some calculations remain beyond the reach of any supercomputer. For instance, supercomputers can simulate simple molecules, but simulating complex molecules with high accuracy can overwhelm even the best machines.
“Simulating my favorite molecule—and perhaps your favorite molecule—caffeine, would take a number of transistors roughly equal to the number of silicon atoms in the entire planet Earth,” Nichol says. “We’re actually not far away from individual transistors being the size of an individual atom. If we want to continue the scaling of modern computing technology, we will be confronted with the quantum properties of individual atoms and, in a sense, will be forced into quantum computing.”
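Nichol’s point can be made with back-of-the-envelope arithmetic. An exact classical simulation of a quantum system with n two-level degrees of freedom must track 2^n complex amplitudes, so the memory required doubles with each particle added. The sketch below, which assumes 16 bytes per amplitude and ignores the compressed representations real simulation codes use, shows how quickly that exponential wall arrives.

```python
# Back-of-the-envelope: why exact quantum simulation overwhelms classical
# machines. A system of n two-level quantum degrees of freedom has 2**n
# amplitudes; at 16 bytes per complex number, memory doubles with every
# particle added. (Illustrative only -- real molecular simulations use
# cleverer representations, but the exponential scaling is the same.)
BYTES_PER_AMPLITUDE = 16  # one double-precision complex number

for n in (10, 30, 50, 100):
    memory = (2 ** n) * BYTES_PER_AMPLITUDE
    print(f"n = {n:>3}: {2**n:.3e} amplitudes, ~{memory:.3e} bytes")
```

At n = 50 the state vector already needs roughly 18 petabytes, more memory than any machine on the TOP500 list has, which is why even modest molecules push exact simulation out of classical reach.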