Not just techies: the scientific community loves GPUs too

Supercomputers built in the 70s and 80s depended on the SIMD (single instruction, multiple data) paradigm. In the 90s, cheaper and faster MIMD platforms such as clusters and grids emerged, and SIMD was slowly pushed aside. Today, however, it has made a comeback: the raw computing power and low cost of building SIMD-style architectures have turned the heads of the scientific community.

Nowadays, research has become very demanding: studies combine large volumes of data and processing with complex algorithms. This has triggered the adoption of GPUs in all areas, since they provide a high-performance system with massive computing power from relatively modest hardware.

To better understand why, let's take a look at the scientific community and its enthusiasm for GPU adoption.

The scientific community and its obsession with GPUs

GPU computing was born when the scientific community sought to use the GPU's raw processing power for intensive calculations, after the computing power of graphics processing units (GPUs) was found to rival that of hundreds of CPU cores. Because of the GPU's architecture, however, this power can only be fully utilized with specialized algorithms.

Graphics computations are extremely parallel and arithmetic-heavy, and this guided the development of early GPUs with little or no cache at all. High-performance computing (HPC) also developed rapidly along another path, as general-purpose many-core devices such as coprocessors and many integrated core (MIC) processors evolved. GPUs in particular have gained popularity due to their price and highly efficient parallel multi-core capabilities.

These devices make it possible to build a series of cheaper, power-efficient systems and enable tera-scale performance on common workstations. The tera-scale figure, however, is a theoretical peak obtained by distributing the workload across every core. One important difference between a GPU workstation and a shared HPC cluster is that the former can deliver this performance without scheduling tasks or sharing confidential data with other users.

How can GPUs help scientific research?

Systems that leverage GPUs help scientists quickly investigate data and run experiments and simulations. GPU acceleration shortens the time to results, and faster results can lead to more breakthroughs; from a human point of view, they also mean less stress and significant savings in materials and energy.

GPU operations are vectorized: a single operation is performed on up to four values at once. Historically, CPUs used hardware-managed caches while GPUs provided only software-managed local memories. However, as GPUs are increasingly used for general-purpose applications, they are now manufactured with hardware-managed multi-level caches, making them ready for mainstream computing. GPUs also have large register files, which help reduce context-switching latency.
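
To make the idea of vectorized GPU operations concrete, here is a minimal CUDA sketch (the kernel name and launch sizes are illustrative assumptions, not from any particular library) that uses the built-in float4 vector type to load, scale, and store four float values per thread in a single memory transaction:

```cuda
// Illustrative kernel: each thread handles four floats at once via the
// built-in float4 vector type, turning four loads/stores into one each.
__global__ void scale4(float4 *data, float s, int n4) // n4 = count of float4 elements
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n4) {
        float4 v = data[i];                  // one vectorized 128-bit load
        v.x *= s; v.y *= s; v.z *= s; v.w *= s;
        data[i] = v;                         // one vectorized 128-bit store
    }
}
// Example launch (assuming d_data holds n4 float4 elements on the device):
// scale4<<<(n4 + 255) / 256, 256>>>(d_data, 2.0f, n4);
```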

As a forerunner in large-scale GPU production, NVIDIA introduced its massively parallel architecture, the Compute Unified Device Architecture (CUDA), in 2006, and the GPU programming model has evolved around it. CUDA uses the parallel computing engine of an NVIDIA GPU to solve massive computational problems, and a CUDA program executes partly on the host (CPU) and partly on the device (GPU).
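
As a rough sketch of this host/device split (the kernel, array sizes, and use of unified memory here are assumptions for illustration), a minimal CUDA program might look like this:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Device code: runs on the GPU, one thread per array element.
__global__ void add(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

// Host code: runs on the CPU, prepares data, and launches the kernel.
int main()
{
    const int n = 1 << 20;                 // illustrative problem size
    const size_t bytes = n * sizeof(float);

    float *a, *b, *c;                      // unified memory: visible to host and device
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;
    int blocks  = (n + threads - 1) / threads;
    add<<<blocks, threads>>>(a, b, c, n);  // host launches the device kernel
    cudaDeviceSynchronize();               // wait for the GPU to finish

    printf("c[0] = %f\n", c[0]);           // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

The <<<blocks, threads>>> launch syntax is the point where the host hands work to the device; everything marked __global__ runs on the GPU.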

CUDA has three building blocks that must be used effectively to achieve maximum computing capacity: grids, blocks, and threads, which organize the many parallel threads a program runs. This three-level hierarchy is independent of how execution is scheduled. A grid consists of thread blocks that run independently. Each block organizes its threads in an up-to-3D array and has a unique block ID (blockIdx), and each thread executing the kernel function has a unique thread ID (threadIdx). GPU devices provide multiple memories for running threads, and a block may hold at most 1024 threads. Each thread can access variables in local memory and in GPU registers; registers have the highest bandwidth, and each block has its own shared memory of 16 KB or 48 KB.
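
To make the grid/block/thread hierarchy concrete, here is an illustrative kernel (the 256-thread block size and all names are assumptions for the example) that uses blockIdx, threadIdx, and per-block shared memory to compute one partial sum per block:

```cuda
// Illustrative block-wise sum: each block reduces its slice of the input
// in shared memory and writes one partial result. Launch with 256 threads
// per block so that blockDim.x matches the tile size (a power of two).
__global__ void blockSum(const float *in, float *out, int n)
{
    __shared__ float tile[256];                 // per-block shared memory
    int tid = threadIdx.x;                      // unique within the block
    int i   = blockIdx.x * blockDim.x + tid;    // unique across the grid

    tile[tid] = (i < n) ? in[i] : 0.0f;         // load this block's slice
    __syncthreads();                            // wait for the whole block

    // Tree reduction within the block.
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (tid < stride) tile[tid] += tile[tid + stride];
        __syncthreads();
    }
    if (tid == 0) out[blockIdx.x] = tile[0];    // one partial sum per block
}
```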

Some examples of GPUs deployed in various fields are listed below.

Astrophysics: Researchers typically work with datasets of 100 terabytes or larger, and GPU acceleration and AI help them separate signal from noise in these massive datasets. GPUs are also used to run large-scale simulations of the universe, helping researchers understand phenomena such as neutron star collisions.

Medicine and biology: In the medical field, deep learning tools help doctors identify diseases and abnormalities, for example by spotting glioblastoma tumors in brain scans. Developers can also use supercomputing resources to create more complex models and thereby increase the accuracy of cancer diagnosis.

Drug discovery: Identifying critical compounds for drug candidates is computationally demanding, time-consuming, and expensive. With the help of GPU-accelerated systems, researchers can quickly simulate protein folding and narrow down the candidates to test.

Environment: GPUs are widely used for weather modeling and energy research, where scientists rely on high-fidelity simulations to inspect complex natural systems. Researchers use them to predict weather events and earthquakes, inform precision farming projects, and investigate potential energy sources such as nuclear fusion.

Sam D. Gomez