Abstract

Massive, fine-grained parallel computing capabilities will be needed to help researchers effectively use petascale computing environments. In particular, petascale computing will gain performance from the parallel processing capabilities of graphics processing units (GPUs). The concept behind the general-purpose GPU (GPGPU) is simple: use the massively parallel architecture of the graphics processor for general-purpose computing tasks. Because of that parallelism, ordinary calculations can be dramatically accelerated. GPGPU is already being used as a high-performance coprocessor for oil and gas exploration and other applications, at a fraction of the cost of a supercomputer. Scientists and researchers benefit from the power of this massively parallel computing architecture, whose availability in systems ranging from a single workstation to server clusters promises to unlock answers to previously intractable problems. Using a GPU as a computing unit may appear complex. Rather than dividing the task into a handful of threads, as on a multicore CPU, it involves thousands of threads. Using the GPU is therefore pointless if the task is not massively parallel, and in this respect it resembles a supercomputer more than a multicore CPU: an application targeted at a supercomputer is necessarily divided into an enormous number of threads, and a GPU can thus be seen as an economical version of such a machine, stripped of its complex structure. NVIDIA CUDA is a software layer intended for stream computing, together with an extension to the C programming language that allows certain functions to be marked for processing by the GPU instead of the CPU. These functions are compiled by a CUDA-specific compiler so that they can be executed on the GPU's numerous compute units.
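To illustrate this programming model (a minimal sketch, not code from the work itself; the kernel name `addVectors` and the problem size are illustrative), a function marked with CUDA's `__global__` qualifier is compiled by the CUDA compiler for the GPU and launched across thousands of threads, each handling one element:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Kernel: each GPU thread adds one pair of elements. The __global__
// qualifier tells the CUDA compiler to build this function for the GPU.
__global__ void addVectors(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)                                      // guard the tail block
        c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                      // one million elements
    size_t bytes = n * sizeof(float);

    // Host buffers, initialized on the CPU.
    float *ha = (float *)malloc(bytes), *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device buffers, and copies from host to GPU memory.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    addVectors<<<blocks, threads>>>(da, db, dc, n);

    // Copy the result back; cudaMemcpy implicitly waits for the kernel.
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```

The launch configuration `<<<blocks, threads>>>` is what distinguishes this from ordinary C: the same scalar function body is executed by roughly a million threads in parallel, which is why only massively parallel tasks benefit.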
Thus, the GPU is seen as a massively parallel coprocessor well suited to highly parallel algorithms such as those used in seismic processing and reservoir simulation. The NVIDIA Tesla product line is dedicated to HPC. The Tesla Computing System is a slim 1U form factor that easily scales to solve the most complex, data-intensive HPC problems. It is equipped with four new-generation NVIDIA GPU boards, IEEE 754-compliant double-precision floating point, and a total of 16 GB of video memory. The rack is used in tandem with multicore CPU systems to create a flexible computing solution that fits seamlessly into existing IT infrastructure.
