Abstract

It has become generally accepted that continued improvements in high-performance scientific computation will be achieved only through the ‘exploitation of parallelism’. Despite the nebulous nature of this expression, enthusiasm for the potential of parallel computing has led to calls for improvements in computational performance of more than a thousand-fold in the next few years, or for what is sometimes referred to as a Teraflop (one trillion floating-point operations per second) Computer. Such a system is envisioned as a general-purpose tool for accelerating progress in such widely varied applications as astronomy, biochemistry, circuit analysis, computational fluid dynamics, global economic modeling, high energy physics, materials science, structural analysis, and weather prediction. Although parallel architectures appear to offer the greatest promise for significant improvements in overall computational performance, it is not yet clear whether a general-purpose parallel architecture can realize the large increases sought by the scientific community. This note will take a practical look at the prospect for general-purpose parallel computation and will consider some of the potential limitations by using a simple parametric model of computational performance.
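The abstract does not reproduce the paper's parametric model, but a minimal sketch of one common model of this kind, an Amdahl's-law-style speedup estimate, illustrates why thousand-fold improvements from parallelism alone are hard to guarantee. The function name and parameters below are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch (not the paper's actual model): an Amdahl's-law-style
# parametric estimate of speedup, where serial_fraction is the portion of
# the workload that cannot be parallelized.

def speedup(serial_fraction: float, processors: int) -> float:
    """Estimated speedup S(N) = 1 / (s + (1 - s) / N)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

if __name__ == "__main__":
    # Even a 0.1% serial fraction caps the achievable speedup near 1000x,
    # no matter how many processors are added.
    for n in (10, 100, 1_000, 10_000, 100_000):
        print(f"{n:>7} processors: {speedup(0.001, n):8.1f}x")
```

Under these assumptions, the serial fraction of a workload, not the processor count, quickly becomes the binding constraint on overall performance.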
