What is High-Performance Computing?

This is an introduction to High-Performance Computing.

High-Performance Computing (HPC) is the use of parallel processing to aggregate computing power and run large-scale, computationally heavy applications with high throughput and efficiency. The term applies especially to systems that perform above a teraflop (10^12 floating-point operations per second). The best-known type of HPC system is the supercomputer: a cluster of thousands of compute nodes that run parallel programs and work together to complete one or more time-intensive tasks.
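To make the idea of a parallel program concrete, the sketch below (an illustration, not part of the original article) uses MPI, a message-passing library commonly used on HPC clusters. Each copy of the program runs as a separate process, typically on a different compute node, and all copies execute at the same time.

```c
/* parallel_hello.c - a minimal sketch of a parallel program, assuming an MPI
 * implementation such as Open MPI or MPICH is available on the cluster. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);                /* start the parallel environment      */

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* ID of this process                  */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes           */

    printf("Hello from process %d of %d\n", rank, size);

    MPI_Finalize();                        /* shut down the parallel environment  */
    return 0;
}
```

On a typical Linux cluster this would be compiled with mpicc and launched across nodes with a command like "mpirun -np 64 ./parallel_hello"; the exact compiler wrappers and launch commands depend on the site's setup.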

 

To build an HPC architecture, compute nodes in a cluster are networked together, and software programs and algorithms run simultaneously on these servers. The cluster is also connected to extremely fast data storage that captures the output. The overall performance of an HPC infrastructure therefore depends on how well each of its components (compute nodes, storage, network connection) keeps pace with the others.
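As an illustration of this architecture (a sketch, not part of the original article), the MPI program below splits one computation across processes running on the cluster's compute nodes, combines the partial results over the network, and has one process capture the output on storage. The file path is hypothetical; real clusters define their own fast scratch locations.

```c
/* partial_sum.c - illustrative sketch of compute nodes cooperating on one task.
 * Assumes MPI; compile with mpicc and run with mpirun across several nodes. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each process sums its own slice of 1..1,000,000 in parallel. */
    const long N = 1000000;
    long local = 0;
    for (long i = rank + 1; i <= N; i += size)
        local += i;

    /* Combine the partial results over the cluster network. */
    long total = 0;
    MPI_Reduce(&local, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    /* One process captures the output on storage.
     * "/scratch/results.txt" is a hypothetical path used only for illustration. */
    if (rank == 0) {
        FILE *out = fopen("/scratch/results.txt", "w");
        if (out) {
            fprintf(out, "sum(1..%ld) = %ld\n", N, total);
            fclose(out);
        }
    }

    MPI_Finalize();
    return 0;
}
```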


Importance of High-Performance Computing:


Today, with the advent of faster processors and memory systems, organizations and research groups invest in lightning-fast, highly reliable IT infrastructure to process, store, and analyze massive amounts of data. This infrastructure may be deployed on premises, at the edge, or in the cloud, and it is used for many purposes across a wide range of industries:

  1. Research Labs: Helps scientists in fields ranging from molecular biology and process engineering to nuclear physics
  2. Financial Services (FinServ): Tracks real-time stock trends and automates trading
  3. Media and Entertainment: Edits feature films, renders special effects, and streams live events at large scale
  4. AI and Machine Learning: Powers intelligent and automated tech support, fraud detection, self-driving vehicles, and more

High-Performance Computing at UIC: 


UIC’s first and currently in-production HPC cluster, Extreme, provides researchers with a powerful machine that has 3,500 cores, 24.5 terabytes of memory, and 1.25 petabytes of local raw storage, of which 275 terabytes is raw fast scratch storage. Extreme supports more than 150 researchers across 25 research groups from 5 colleges. The result of unprecedented collaboration among several stakeholders, Extreme is built on a partnership model in which multiple departments and colleges invested to allow their affiliated faculty to use the resource. Extreme is housed in the Roosevelt Road Building (RRB) Data Center with 24/7/365 video surveillance, dedicated power supply, backup power, cooling, and fire suppression systems.

 

Find out more at: https://acer.uic.edu/ 

 

 

 



