Abstract

When the first specification of the FORTRAN language was released in 1956, the goal was to provide an "automatic programming system" that would enhance the economy of programming by replacing assembly language with a notation closer to the domain of scientific programming. A key issue in this context, explicitly recognized by the authors of the language, was the requirement to produce efficient object programs that could compete with their hand‐coded counterparts. More than 50 years later, a similar situation exists with respect to finding the right programming paradigm for high performance computing systems. FORTRAN, as the traditional language for scientific programming, has played a major role in the quest for high‐productivity programming languages that satisfy very strict performance constraints. This paper focuses on high‐level support for locality awareness, one of the most important requirements in this context. The discussion centers on the High Performance Fortran (HPF) family of languages and their influence on current language developments for peta‐scale computing. HPF is a data‐parallel language that was designed to provide the user with a high‐level interface for programming scientific applications, while delegating to the compiler the task of generating an explicitly parallel message‐passing program. We outline developments that led to HPF, explain its major features, identify a set of weaknesses, and discuss subsequent languages that address these problems. The final part of the paper deals with Chapel, a modern object‐oriented language developed in the High Productivity Computing Systems (HPCS) program sponsored by DARPA. A salient property of Chapel is its general framework for the support of user‐defined distributions, which is related in many ways to ideas first described in Vienna Fortran. This framework is general enough to allow a concise specification of sparse data distributions. The paper concludes with an outlook on future research in this area.

Highlights

  • In 1954, more than fifty years ago, John Backus and his group at IBM Corporation began their work on “automatic programming systems” that resulted in the first specification of a high-level algorithmic language in 1956, the FORmula TRANslating system, FORTRAN [22]

  • In view of the surprising similarities to the current situation in programming for high performance computing (HPC) systems, which is the main topic of this paper, it is useful to take a look at the motivation and goals of the original FORTRAN project, which are discussed in detail in John Backus’ history paper [4]

  • A first major community effort resulted in the Message Passing Interface (MPI) [25,42,53], which defined a standardized library for message passing, providing the programmer with a means for explicitly managing and synchronizing communication in a processor-centric model for programming and execution

Summary

Introduction

In 1954, more than fifty years ago, John Backus and his group at IBM Corporation began their work on “automatic programming systems” that resulted in the first specification of a high-level algorithmic language in 1956: the FORmula TRANslating system, FORTRAN [22]. Decades later, in the era of distributed-memory parallel machines, a first major community effort resulted in the Message Passing Interface (MPI) [25,42,53], which defined a standardized library for message passing, providing the programmer with a means for explicitly managing and synchronizing communication in a processor-centric model of programming and execution. While this approach allows virtually full control of communication – at a level of abstraction comparable to that of assembly language – it is commonly understood to result in complex, brittle, and error-prone programs, because algorithms and communication are inextricably interwoven. Data distributions offer an alternative: an abstract specification of the partitioning of large-scale data collections across units of uniform memory access, supporting coarse-grain parallel computation and locality of access at a high level of abstraction. The discussion of such approaches – almost all of which were originally based on FORTRAN – is the focus of this paper.
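To make the contrast concrete, the sketch below (written in HPF directive notation, with hypothetical array and processor names not taken from the paper) illustrates the data-distribution style: the programmer states only how the arrays are partitioned and aligned, and the compiler determines the ownership of computation and generates the message-passing code that would otherwise have to be written by hand.

    program hpf_sketch
      ! Hypothetical names (A, B, P) chosen for illustration only.
      integer :: i
      real    :: A(1000), B(1000)

      ! Declare four abstract processors, split A into four contiguous
      ! blocks across them, and store each B(i) with the matching A(i).
    !HPF$ PROCESSORS P(4)
    !HPF$ DISTRIBUTE A(BLOCK) ONTO P
    !HPF$ ALIGN B(I) WITH A(I)

      A = 1.0

      ! Accesses near block boundaries need values owned by a neighboring
      ! processor; that communication is generated by the compiler.
    !HPF$ INDEPENDENT
      do i = 2, 999
         B(i) = 0.5 * (A(i-1) + A(i+1))
      end do
    end program hpf_sketch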

Basic concepts
Domains
Index mappings
Alignment
One-dimensional distribution classes
Block distributions
Distributions for multi-dimensional index sets
HPF predecessors
High Performance Fortran
Explicit management of communication schedules
Chapel
User-defined specification of distributions in Chapel
Conclusion