Abstract
Code optimization has a rich history that dates back over half a century. Over the years, it has contributed deep innovations to address challenges posed by new computer-system and programming-language features. Examples of the former include optimizations for improved register utilization, instruction-level parallelism, vector parallelism, multiprocessor parallelism, and memory-hierarchy utilization. Examples of the latter include optimizations for procedural, object-oriented, functional, and domain-specific languages, as well as dynamic optimization for managed runtimes. These optimizations have contributed significantly to programmer productivity by reducing the effort that programmers must spend hand-implementing code optimizations, and by making code more portable, especially as programming models and computer architectures change. While compiler frameworks are often able to incorporate new code optimizations in an evolutionary manner, there have been notable periods in the history of compilers when more revolutionary changes were necessary. Examples of such paradigm shifts include interprocedural whole-program analysis, coloring-based register allocation, static single assignment (SSA) form, array dependence analysis, pointer alias analysis, loop transformations, adaptive profile-directed optimizations, and dynamic compilation. The revolutionary nature of these shifts is evidenced by the fact that production-strength optimization frameworks (especially those in industry) had to be rewritten from scratch, or significantly modified, to support the new capabilities.

In this talk, we claim that the current multicore trend in the computer industry is forcing a new paradigm shift in compilers to address the challenge of optimizing parallel programs, regardless of whether the parallelism is implicit or explicit in the programming model. All computers --- embedded, mainstream, and high-end --- are now built from multicore processors with little or no increase in clock speed per core. This trend poses multiple challenges for compilers for future systems as the number of cores per socket continues to grow and the cores become more heterogeneous. In addition, compilers have to keep pace with emerging parallel programming models embodied in a proliferation of new libraries and new languages.

To substantiate our claim, we examine the historical foundations of code optimization, including intermediate representations (IRs), abstract execution models, and the legality and cost analyses of IR transformations, and show that they are all deeply entrenched in the von Neumann model of sequential computing. We discuss ongoing evolutionary efforts to support optimization of parallel programs within existing compiler frameworks, and the inherent limitations of those efforts for the long term. We then outline what a revolutionary approach would entail, and identify where its underlying paradigm shifts are likely to lie. We provide examples of past research that are likely to influence future directions in code optimization of parallel programs, such as program dependence graphs, partitioning and scheduling of lightweight parallelism, synchronization optimizations, communication optimizations, transactional memory optimizations, code generation for heterogeneous accelerators, the impact of memory models on code optimization, and general forms of data and computation alignment.
Finally, we briefly describe the approach to code optimization of parallel programs being taken in the Habanero Multicore Software Research project at Rice University.
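To make concrete why IRs and legality analyses rooted in sequential semantics can miscompile parallel programs, consider the following minimal sketch (an illustration added here, not an example from the talk; the class and field names are hypothetical). Under sequential dependence analysis, the two stores in main() are independent and may be reordered, and the load of `ready` may be hoisted out of the consumer's loop as loop-invariant; both transformations are legal for a sequential program but break this parallel one.

    public class SequentialIrPitfall {
        static int data = 0;           // payload produced by the main thread
        static boolean ready = false;  // plain (non-volatile) publication flag

        public static void main(String[] args) throws InterruptedException {
            Thread consumer = new Thread(() -> {
                // A sequential IR sees no write to `ready` in this loop body,
                // so loop-invariant code motion may hoist the load and spin forever.
                while (!ready) { /* busy-wait */ }
                // Even if the loop exits, `data` may still read 0 if the two
                // stores below were reordered by the compiler or the hardware.
                System.out.println("data = " + data);
            });
            consumer.start();

            data = 42;     // (1) write the payload
            ready = true;  // (2) publish it; sequential dependence analysis finds
                           //     no dependence between (1) and (2), so a classical
                           //     optimizer is free to swap them
            consumer.join();
        }
    }

Declaring `ready` as volatile (or using a synchronized handshake) restores correctness precisely because it makes the parallel semantics visible to the compiler, which is the kind of information that, as argued above, must become a first-class part of IRs and their legality and cost analyses.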