High Performance Computing (HPC) most generally refers to the practice of aggregating computing power to deliver much higher performance than a typical desktop computer or workstation can provide, in order to solve large problems in science, engineering, or business.
Scientists need HPC because they hit a tipping point.
At some point in research, there is a need to:
- Expand the current study area (regional → national → global)
- Integrate new data
- Increase model resolution
But … processing on a desktop or a single server no longer works
Some typical computational barriers:
- Time – processing on local systems is too slow or not feasible
- CPU Capacity – can only run one model at a time (see the sketch after this list)
- Techniques and Tools – researchers need to develop, implement, and disseminate state-of-the-art techniques and tools so that models are more effectively applied to today’s decision-making
- Management of Computer Systems – Science Groups don’t want to purchase and manage local computer systems – they want to focus on science
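To make the "one model at a time" barrier concrete, here is a minimal Python sketch, assuming a hypothetical run_model function, of how a pool of workers lets several independent model runs proceed at once on a multi-core machine; on an HPC cluster the same pattern scales out across nodes with a batch scheduler or MPI.

```python
# Minimal sketch: several independent model runs at once instead of one.
# run_model and the resolution values are hypothetical placeholders.
from multiprocessing import Pool

def run_model(resolution_km):
    """Stand-in for a real simulation run."""
    return f"model finished at {resolution_km} km resolution"

if __name__ == "__main__":
    resolutions = [10, 5, 2, 1]        # four model runs instead of one
    with Pool(processes=4) as pool:    # one worker per run
        for result in pool.map(run_model, resolutions):
            print(result)
```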
“I need to do multiple sampling events of multiple simulations. No way can my existing system pull that off in a timely fashion, if at all.”
~David Warner, Research Fisheries Biologist
“We had an 80 node cluster here in Golden several years ago, but when I left for Memphis, nobody wanted to manage it. Now we have an overworked 12 node server.”
~Oliver Boyd, Research Geophysicist
What is a Supercomputer?
A supercomputer is one large computer made up of many smaller computers and processors.
- Each different computer is called a node
- Each node has processors/cores, which carry out the instructions of the computer
- All of these nodes talk to each other through a communications network
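Here is a minimal sketch of that node-to-node communication, assuming the mpi4py library is installed and the script is launched with a command such as `mpirun -n 4 python sum.py` (the file name and process count are illustrative):

```python
# Each process (one or more per node) computes a partial sum; the
# communications network then combines the pieces on rank 0.
# Assumes mpi4py is installed and the script is started under mpirun.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()    # this process's ID
size = comm.Get_size()    # total number of processes

local = sum(range(rank, 1000, size))             # each rank sums a slice of 0..999
total = comm.reduce(local, op=MPI.SUM, root=0)   # combine over the network

if rank == 0:
    print(f"{size} processes computed total = {total}")
```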
Supercomputers give you the opportunity to solve problems that are too complex for the desktop. A job that would take hours, days, weeks, months, or even years on a desktop might take only minutes, hours, days, or weeks on a supercomputer.
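To put rough numbers on that claim, here is a back-of-the-envelope sketch using Amdahl's law; the 30-day runtime and the assumption that 95% of the work parallelizes are illustrative, not measurements from any real model.

```python
# Amdahl's law: ideal speedup when only part of the work can run in parallel.
# The runtime and parallel fraction below are illustrative assumptions.
def amdahl_speedup(parallel_fraction, n_cores):
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_cores)

serial_days = 30.0    # assumed desktop runtime
for cores in (1, 8, 64, 512):
    s = amdahl_speedup(0.95, cores)
    print(f"{cores:4d} cores: {serial_days / s:6.2f} days  (speedup {s:5.1f}x)")
```

Even with most of the work parallelized, the remaining serial fraction caps the achievable speedup, which is why HPC codes are designed to keep as much of the computation parallel as possible.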