What is High-Performance Computing?
This is an introduction to High-Performance Computing.
High-Performance Computing (HPC) is the use of parallel processing to aggregate computing power and run large-scale, computationally heavy applications with high throughput and efficiency. The term applies especially to systems that perform above a teraflop (10^12 floating point operations per second). One of the best-known types of HPC systems is the supercomputer. A supercomputer is a cluster of thousands of compute nodes that run parallel programs, working together to complete one or more time-intensive computational tasks.
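To put the teraflop threshold in perspective, here is a short back-of-the-envelope calculation; the workload size is a hypothetical example chosen for illustration, not a figure from this article:

```python
# Back-of-the-envelope: how long would a hypothetical workload of
# 10^15 floating point operations take at a sustained one teraflop?
TERAFLOP = 10**12          # floating point operations per second

workload_ops = 10**15      # hypothetical workload size (assumption)
seconds = workload_ops / TERAFLOP

print(seconds)             # 1000.0 seconds, i.e. roughly 17 minutes
```

The same workload at a gigaflop (10^9 operations per second) would take about 11.5 days, which is why sustained teraflop-and-above performance matters for time-intensive research computations.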
To build an HPC architecture, the compute nodes in a cluster are networked together so that software programs and algorithms can run simultaneously across these servers. The cluster is also connected to extremely fast data storage to capture the output. The overall performance of an HPC infrastructure therefore depends heavily on the interoperability of its components (compute nodes, storage, network connection) and on each component's ability to keep pace with the others.
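The split-compute-aggregate pattern described above can be sketched on a single machine with Python's multiprocessing module. This is only a minimal stand-in: on a real cluster the workers would be separate compute nodes communicating over a fast network (typically via MPI), and the function and chunk sizes below are illustrative choices, not part of any particular HPC system.

```python
# Minimal sketch of the parallel idea behind HPC: split a large
# computation into chunks, run the chunks simultaneously on several
# workers, then aggregate the partial results.
from multiprocessing import Pool

def partial_sum(bounds):
    """Compute the sum of i^2 over one chunk [lo, hi)."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    n = 1_000_000
    workers = 4                       # stand-in for compute nodes
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]

    with Pool(workers) as pool:       # run chunks in parallel
        total = sum(pool.map(partial_sum, chunks))

    # Aggregated parallel result matches the sequential computation.
    assert total == sum(i * i for i in range(n))
    print(total)
```

On a cluster, the scheduler (e.g. a batch system such as Slurm) would place each chunk on a different node, and the aggregation step would gather partial results over the interconnect; the logic of the program stays the same.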
Importance of High-Performance Computing:
Today, with the advent of faster processors and memory systems, organizations and research groups invest in lightning-fast, highly reliable IT infrastructure to process, store, and analyze massive amounts of data. This infrastructure may be deployed on premises, at the edge, or in the cloud, and is used for various purposes across a wide range of industries:
- Research Labs: Help scientists in fields ranging from molecular biology and process engineering to nuclear physics
- Financial Services (FinServ): Track real-time stock trends and automate trading
- Media and Entertainment: Edit feature films, render special effects, and stream live events at large scale
- AI and Machine Learning: Power intelligent and automated tech support, fraud detection, self-driving vehicles, and more
High-Performance Computing at UIC:
UIC’s first and currently in-production HPC cluster, Extreme, provides researchers with a powerful machine featuring 3,500 cores, 24.5 terabytes of memory, and 1.25 petabytes of local raw storage, of which 275 terabytes is raw fast scratch storage. Extreme supports more than 150 researchers across 25 research groups from 5 colleges. The result of unprecedented collaboration among several stakeholders, Extreme is built on a partnership model in which multiple departments and colleges invested to allow their affiliated faculty to use the resource. Extreme is housed in the Roosevelt Road Building (RRB) Data Center with 24/7/365 video surveillance, dedicated power supply, backup power, cooling, and fire suppression systems.
Find out more at: https://acer.uic.edu/