Abstract

Modern computer systems have made multiprocessing easily accessible by supporting multiple processors on a single physical package. As multiprocessor hardware evolves, new ways of programming it are developed; some of these simply adopt and standardize older paradigms. One such evolving standard for programming shared-memory parallel computers is the OpenMP API. The Standard Performance Evaluation Corporation (SPEC) has created a suite of parallel programs, SPEC OMP, to compare and evaluate modern shared-memory multiprocessor systems using the OpenMP standard. We have studied these benchmarks in detail to understand their performance on a modern architecture. In this paper, we present detailed measurements of the benchmarks, which we organize, summarize, and display using a quantitative model; we also present a detailed discussion and derivation of the model. Finally, we discuss the important loops in the SPEC OMPM2001 benchmarks and the reasons for less-than-ideal speedup on our platform.

Highlights

  • With the breakthroughs in standard off-the-shelf microprocessor and memory technologies and their use in building cost-effective Shared-memory Multiprocessor (SMP) systems, SMP systems have gained prominence in the marketplace.

  • We discovered that the Fortran compiler treats stores to THREADPRIVATE variables as “volatile.” Since volatile variables must be reloaded from memory each time they are needed, they cannot be allocated in registers (see the sketch after this list).

  • Our goal was to study a set of modern scientific shared-address-space (SAS) parallel programs.

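The THREADPRIVATE observation above concerns Fortran THREADPRIVATE COMMON blocks in the benchmarks; the construct itself is easiest to show compactly in C. The following is a minimal illustrative sketch of an OpenMP threadprivate variable. The variable name and loop are invented for illustration, and the “volatile” treatment is a behavior of the particular Fortran compiler studied, not something this sketch reproduces.

    #include <stdio.h>
    #include <omp.h>

    /* One private copy of 'counter' per thread, preserved across
       parallel regions. 'counter' is an illustrative name, not a
       variable from the benchmarks. */
    int counter = 0;
    #pragma omp threadprivate(counter)

    int main(void)
    {
        #pragma omp parallel
        {
            /* Each thread increments only its own copy. If the
               compiler treats these accesses as volatile, 'counter'
               is re-read from memory on every use instead of being
               kept in a register, which is the overhead noted above. */
            for (int i = 0; i < 1000; i++)
                counter++;
            printf("thread %d: counter = %d\n",
                   omp_get_thread_num(), counter);
        }
        return 0;
    }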

Summary

Introduction

With the breakthroughs in standard off-the-shelf microprocessor and memory technologies and their use in building cost-effective Shared-memory Multiprocessor (SMP) systems, SMP systems have gained prominence in the marketplace. As their popularity grows, more sophisticated yet flexible development and runtime environments are needed to facilitate rapid and efficient development of parallel applications. Among the available programming models, each with its own benefits, directive-based programming and POSIX thread programming have gained prominence for small to medium-range SMPs. The OpenMP API [6] (Application Programming Interface) has fulfilled this need by providing a flexible, scalable, and fairly comprehensive set of compiler directives, library routines, and environment variables with which programs can be parallelized incrementally. OpenMP is still evolving to better accommodate the needs of parallel programmers.
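As a minimal sketch of the incremental style described above, the following C example parallelizes one loop with a single directive, calls an OpenMP library routine, and can be tuned at run time through the OMP_NUM_THREADS environment variable. The arrays, sizes, and values are arbitrary illustrations, not taken from the benchmarks.

    #include <stdio.h>
    #include <omp.h>

    #define N 1000000

    static double a[N], b[N];   /* arbitrary illustrative arrays */

    int main(void)
    {
        double sum = 0.0;

        /* A single directive is enough to parallelize the loop;
           the reduction clause keeps 'sum' correct across threads. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++) {
            a[i] = 0.5 * i;
            b[i] = 2.0 * i;
            sum += a[i] * b[i];
        }

        /* Library routine; the thread count itself can be set
           externally via the OMP_NUM_THREADS environment variable
           without recompiling. */
        printf("sum = %g, max threads = %d\n", sum, omp_get_max_threads());
        return 0;
    }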
