Abstract

Today's large computing problems must be solved very rapidly, which motivates parallel computing, in which several machines or processors work cooperatively on a computational task. Over the past decades, the perceived importance of parallelism in computing machines has varied considerably, and parallel computing has proven to be a superior answer to many computing limitations, such as processor speed and density, high non-recurring cost, and power consumption and heat dissipation. Commercial multiprocessors have emerged at far lower prices than mainframe machines and supercomputers. In this article, high performance computing (HPC) through parallel programming paradigms (PPPs) is discussed, together with their constructs and design approaches.

Highlights

  • Numerous computationally intensive tasks in computer science, such as weather forecasting, climate research, oil and gas exploration, molecular modelling, quantum mechanics, and physical simulations, are performed by supercomputers as well as mainframe computers

  • Concurrency and Parallelism: The terms concurrency and parallelism must first be clearly distinguished

  • The structured parallel programming construct is introduced as a structured region (see the sketch after this list)
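As one possible illustration (not taken from the paper), a structured region can be sketched with OpenMP's parallel construct in C: the braced block has a single entry and a single exit, and a team of threads enters and leaves it as one unit.

    /* A minimal sketch of a structured parallel region using OpenMP.
       Illustrative only; compile with: gcc -fopenmp region.c -o region */
    #include <stdio.h>
    #include <omp.h>

    int main(void) {
        #pragma omp parallel          /* start of the structured region */
        {
            int id = omp_get_thread_num();
            printf("hello from thread %d of %d\n",
                   id, omp_get_num_threads());
        }                             /* implicit barrier: all threads join here */
        return 0;
    }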


Summary

INTRODUCTION

Numerous computationally intensive tasks in computer science, such as weather forecasting, climate research, oil and gas exploration, molecular modelling, quantum mechanics, and physical simulations, are performed by supercomputers as well as mainframe computers. Due to recent advances in hardware technologies, we are leaving the von Neumann computation model and adopting distributed computing models, which include the peer-to-peer (P2P), cluster, cloud, grid, and jungle computing models [1]. All of these models are used to achieve parallelism and are high performance computing (HPC) models. It is common practice for computing machines to execute various programs concurrently. A machine may have an architecture with multiple processors (various CPUs) that share a common memory space, as shown in Fig. (a), or an architecture in which the processors have their own independent, distributed memories, as shown in Fig. (b). The big challenge for scientists is to utilize these hardware technologies efficiently and effectively so that the processors work cooperatively. The remainder of the article is organized as follows: related works are covered in Section 2.
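To make the distributed-memory architecture of Fig. (b) concrete, the following is a minimal sketch in C using MPI, assuming a standard MPI installation; the program and its variable names are illustrative and not taken from the paper. In contrast to the shared-memory region sketched earlier, each process here owns a private address space, so partial results must be combined by explicit message passing rather than through shared variables.

    /* A minimal sketch of the distributed-memory model with MPI.
       Illustrative only; compile with: mpicc sum_mpi.c -o sum_mpi
       and run with, e.g.: mpirun -np 4 ./sum_mpi */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id   */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* number of processes */

        /* Each process computes a partial sum over its own slice... */
        long n = 1000000;
        double local = 0.0;
        for (long i = rank; i < n; i += size)
            local += 1.0 / (i + 1);

        /* ...and the partial sums are combined by an explicit collective
           communication, since no memory is shared between processes. */
        double total = 0.0;
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum = %f across %d processes\n", total, size);

        MPI_Finalize();
        return 0;
    }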

RELATED WORKS
PARALLEL PROGRAMMING CONSTRUCTS OR PRINCIPLES
Structured Construct – Structured Region
Thread-Based Constructs
Synchronization
Critical Sections
Deadlock
Object-Oriented Constructs
Object Replication
Latency Hiding
Termination Detection
Concurrency
Data Distribution
Inter-process Communication
Computational Load Balancing
Variable Definitions
Parallel Compositions
Program Structures
Ease of Programming and Debugging
PARALLEL PROGRAMMING APPROACHES
Explicit Parallelism
Hybrid Parallelism
PARALLEL PROGRAMMING PARADIGMS
Fortress
OpenMP
OpenMPI
Erlang
Manticore Programming Language
CONCLUSIONS