Abstract

Parallel programming allows the speed of computations to be increased by using multiple processors or computers working jointly on the same task. Parallel programming brings difficulties that are not present in sequential programming, such as communication between processors. The way a parallel program is written depends strictly on the architecture of the parallel system. An efficient program of this kind not only performs its computations faster than its sequential version, but also uses CPU time effectively. Parallel programming has been present in high-energy physics for years. This lecture is an introduction to parallel computing in general. It discusses the motivation for parallel computations, the hardware architectures of parallel systems and the key concepts of parallel programming. It also relates parallel computing to high-energy physics and presents a parallel programming application in the field, namely PROOF.

1 Parallel computing

1.1 Motivation

The speed of modern general-purpose processors is of the order of 10 billion floating-point operations per second (10 GFLOPS). This number may seem unreachably high, but there are applications with much larger requirements: weather forecasting, climate change prediction, financial analysis, earthquake simulation, protein folding [1] and others. The accuracy of such simulations may often be improved by increasing the amount of computation. In many cases the result must be known within a certain amount of time, so the execution time is limited. Weather forecasting, for example, has to be ready before the evening news, and a climate change simulation may have to be finished before the annual report of a grant is prepared.

Traditional computer programs are sequences of instructions executed by a processor (also called a central processing unit, CPU) one by one. This model imposes a limit on the number of computations performed per second. To go beyond this limit, multiple CPUs or multiple computers have to be used simultaneously.

Some problems may be decomposed into smaller, independent sub-problems. In such a case, each processor gets its own part and solves it independently of the others. This may be done, for instance, in high-energy physics, when simulating a detector's response for 10000 events: the events may be distributed among the processors (see the sketch at the end of this section). Other problems, such as the one presented in section 1.2, cannot be decomposed in this way, which means that the processors have to cooperate to solve them. A parallel program uses multiple CPUs for its computations and manages the communication between the processors.

Parallel programs may be run on various architectures. The architectures are often divided into two categories: single instruction, multiple data (SIMD) and multiple instruction, multiple data (MIMD) [2] (see also [3] and [4]). In the first case, a single CPU instruction operates on multiple data, usually arrays of numbers. In the second case, the programs for each processor are separate, and a single instruction operates only on single operands.
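As a toy illustration of the SIMD idea, the following C++ fragment contrasts a scalar loop, where each add instruction consumes one pair of operands, with a single SSE instruction that adds four pairs of floats at once. It is only a sketch and assumes an x86 processor with SSE support; the intrinsics shown (_mm_loadu_ps, _mm_add_ps, _mm_storeu_ps) are part of the standard x86 intrinsics interface.

```cpp
// Sketch: scalar vs. SIMD addition on x86 (assumes SSE support).
#include <immintrin.h>
#include <cstdio>

int main() {
    float a[4] = {1, 2, 3, 4}, b[4] = {10, 20, 30, 40}, c[4];

    // Scalar model: four separate add instructions,
    // each operating on a single pair of operands.
    for (int i = 0; i < 4; ++i) c[i] = a[i] + b[i];

    // SIMD model: one _mm_add_ps instruction adds all
    // four pairs of floats simultaneously.
    __m128 va = _mm_loadu_ps(a);
    __m128 vb = _mm_loadu_ps(b);
    _mm_storeu_ps(c, _mm_add_ps(va, vb));

    for (int i = 0; i < 4; ++i) std::printf("%g ", c[i]);
    std::printf("\n");
    return 0;
}
```

In practice, compilers often generate such vector instructions automatically from the scalar loop; the explicit intrinsics merely make the "one instruction, multiple data" pattern visible.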
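The event-distribution idea mentioned above can be sketched as follows. This is a hypothetical example, not code from the lecture: simulate_event() stands in for a real detector-response simulation, and standard C++ threads stand in for whatever parallel machinery an actual experiment would use. Because the events are independent, no communication between workers is needed.

```cpp
// Sketch: distributing 10000 independent events among worker threads.
#include <algorithm>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

// Placeholder for an expensive, independent per-event computation.
double simulate_event(int event_id) {
    double x = 0.0;
    for (int i = 1; i <= 100000; ++i)
        x += 1.0 / (i + event_id);
    return x;
}

int main() {
    const int n_events = 10000;
    const unsigned n_workers =
        std::max(1u, std::thread::hardware_concurrency());
    std::vector<double> results(n_events);
    std::vector<std::thread> workers;

    // Each worker takes every n_workers-th event; the workers never
    // touch the same element of 'results', so no locking is required.
    for (unsigned w = 0; w < n_workers; ++w) {
        workers.emplace_back([&, w] {
            for (int e = static_cast<int>(w); e < n_events;
                 e += static_cast<int>(n_workers))
                results[e] = simulate_event(e);
        });
    }
    for (auto& t : workers) t.join();

    std::cout << "sum = "
              << std::accumulate(results.begin(), results.end(), 0.0)
              << '\n';
    return 0;
}
```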
