Abstract

Many world-leading high-end computing (HEC) facilities now offer over 100 Teraflop/s of performance, and several initiatives have begun to look forward to Petascale computing (10^15 flop/s). Los Alamos National Laboratory and Oak Ridge National Laboratory (ORNL) already operate Petascale systems, which lead the current (Nov 2008) TOP500 list [1]. Computing at the Petascale raises a number of significant challenges for parallel computational fluid dynamics codes. Most significantly, further improvements to the performance of individual processors will be limited, and therefore Petascale systems are likely to contain 100,000+ processors. A critical aspect of utilising high Terascale and Petascale resources is thus the scalability of the underlying numerical methods: both the scaling of execution time with the number of processors (strong scaling) and the scaling of execution time with problem size (weak scaling). In this paper we analyse the performance of several CFD codes for a range of datasets on some of the latest high-performance computing architectures. This includes Direct Numerical Simulations (DNS) via the SBLI [2] and SENGA2 [3] codes, and Large Eddy Simulations (LES) using both STREAMS LES [4] and the general-purpose open-source CFD code Code Saturne [5].

Key words: Petascale, High-End Computing, Parallel Performance, Direct Numerical Simulations, Large Eddy Simulations
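To make the two notions of scalability concrete, the sketch below (not taken from the paper; the timing figures are hypothetical placeholders) shows how strong-scaling and weak-scaling efficiency are conventionally computed from wall-clock timings.

    # A minimal sketch of the two standard scalability metrics.
    # All timing values here are hypothetical, for illustration only.

    def strong_scaling_efficiency(t1, tp, p):
        """Fixed problem size: fraction of ideal speedup retained on p processors."""
        speedup = t1 / tp      # ideal value is p
        return speedup / p     # ideal value is 1.0

    def weak_scaling_efficiency(t1, tp):
        """Problem size grown in proportion to p: ideal runtime stays constant."""
        return t1 / tp         # ideal value is 1.0

    # Hypothetical strong-scaling timings: (processors, wall-clock seconds)
    # for a fixed computational grid.
    timings = [(1, 1000.0), (256, 4.4), (4096, 0.41)]
    t1 = timings[0][1]
    for p, tp in timings[1:]:
        print(f"p={p:5d}  speedup={t1 / tp:8.1f}  "
              f"efficiency={strong_scaling_efficiency(t1, tp, p):.2f}")

For the hypothetical numbers above, efficiency falls from roughly 0.89 on 256 processors to roughly 0.60 on 4,096, illustrating why scalability, rather than single-processor performance, dominates at the 100,000+ processor counts the paper anticipates.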
