Abstract

In this chapter, the authors discuss issues surrounding High Performance Computing (HPC)-driven science using the example of peta-science Monte Carlo experiments conducted at Brookhaven National Laboratory (BNL), one of the US Department of Energy (DOE) High Energy and Nuclear Physics (HENP) research sites. BNL, which hosts the only remaining US-based HENP experiments and apparatus, is an appropriate setting in which to study the nature of these High-Throughput Computing (HTC)-hungry experiments and the short historical development of the HPC technology used in them. The development of parallel processors, multiprocessor systems, custom clusters, supercomputers, networked super systems, and hierarchical parallelism is presented in an evolutionary manner. Coarse-grained, rigid Grid system parallelism is contrasted with cloud computing, which is classified within this chapter as flexible, fine-grained soft system parallelism. In evaluating various high performance computing options, a clear distinction is made between high availability-bound enterprise computing and high scalability-bound scientific computing. This distinction is used to further differentiate cloud computing from pre-cloud computing technologies and to better fit cloud computing into scientific HPC.