Massively parallel computers now permit the molecular dynamics (MD) simulation of multi-million atom systems on time scales up to the microsecond. However, the subsequent analysis of the resulting simulation trajectories has itself become a high performance computing problem. Here, we present software for calculating X-ray and neutron scattering intensities from MD simulation data that scales well on massively parallel supercomputers. The calculation and data staging schemes used maximize the degree of parallelism and minimize the IO bandwidth requirements. Strong scaling tests on the petaflop Cray XT5 Jaguar at Oak Ridge National Laboratory exhibit virtually linear scaling up to 7000 cores for most benchmark systems. Since both MPI and thread parallelism are supported, the software is flexible enough to cover the scaling demands of different types of scattering calculations. The result is a high performance tool capable of unifying large-scale supercomputing with a wide variety of neutron/synchrotron technologies.
Program summary
Program title: Sassena
Catalogue identifier: AELW_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AELW_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU General Public License, version 3
No. of lines in distributed program, including test data, etc.: 1 003 742
No. of bytes in distributed program, including test data, etc.: 798
Distribution format: tar.gz
Programming language: C++, OpenMPI
Computer: Distributed memory, cluster of computers with high performance network, supercomputer
Operating system: UNIX, LINUX, OSX
Has the code been vectorized or parallelized?: Yes, the code has been parallelized using MPI directives. Tested with up to 7000 processors
RAM: Up to 1 Gbytes/core
Classification: 6.5, 8
External routines: Boost Library, FFTW3, CMAKE, GNU C++ Compiler, OpenMPI, LibXML, LAPACK
Nature of problem: Recent developments in supercomputing allow molecular dynamics simulations to generate large trajectories spanning millions of frames and thousands of atoms. The structural and dynamical analysis of these trajectories requires analysis algorithms which use parallel computation and IO schemes to solve the computational task in a practical amount of time. The particular computational and IO requirements depend strongly on the analysis algorithm. A very frequent pattern in scattering calculations is that the trajectory data are used multiple times to compute different projections, which are then aggregated into a single scattering function. Thus, for good performance the trajectory data have to be kept in memory, and the parallel computer has to have enough RAM to store a volatile version of the whole trajectory. In order to achieve high performance and good scalability, the mapping of the physical equations onto a parallel computer needs to consider data locality and reduce the amount of inter-node communication.
Solution method: The physical equations for scattering calculations were analyzed and two major calculation schemes were developed to support any type of scattering calculation (all/self). Certain hardware aspects were taken into account; e.g., high performance computing clusters and supercomputers usually feature a two-tier network, with Ethernet providing the file storage and InfiniBand the inter-node communication via MPI calls. The time spent loading the trajectory data into memory is minimized by letting each core read only the trajectory data it requires.
The performance of inter-node communication is maximized by exclusively utilizing the appropriate MPI calls to exchange the necessary data, resulting in excellent scalability. The partitioning scheme developed to map the calculation onto a parallel computer covers a wide variety of use cases without negatively affecting the achieved performance. This is done through a 2D partitioning scheme in which independent scattering vectors are assigned to independent parallel partitions and all communication is local to the partition (the scattering functions, the data staging, and the partitioning are illustrated in the sketches following this summary).
Additional comments: !!!!! The distribution file for this program is approximately 36 Mbytes and therefore is not delivered directly when download or E-mail is requested. Instead an html file giving details of how the program can be obtained is sent. !!!!!
Running time: Usual runtimes span from 1 min on 20 nodes to 2 h on 2000 nodes, i.e. 0.5–4000 CPU hours per execution.
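For reference, the "all" and "self" schemes mentioned under Solution method correspond, in the standard formulation of scattering from MD trajectories, to the coherent and incoherent intermediate scattering functions; the expressions below are the textbook definitions (up to normalization conventions), not a verbatim transcription of the code:

F_{\mathrm{coh}}(\mathbf{q},t) = \frac{1}{N} \Big\langle \sum_{j=1}^{N} \sum_{k=1}^{N} b_j^{*} b_k \, e^{-i\mathbf{q}\cdot\mathbf{r}_j(0)} \, e^{i\mathbf{q}\cdot\mathbf{r}_k(t)} \Big\rangle

F_{\mathrm{inc}}(\mathbf{q},t) = \frac{1}{N} \Big\langle \sum_{j=1}^{N} |b_{j,\mathrm{inc}}|^{2} \, e^{i\mathbf{q}\cdot[\mathbf{r}_j(t)-\mathbf{r}_j(0)]} \Big\rangle

Here the b_j are scattering lengths (or q-dependent form factors in the X-ray case), the r_j(t) are atomic positions taken from the trajectory, and the angle brackets denote averaging over time origins and scattering-vector orientations. The "all" scheme involves the cross terms j ≠ k and therefore reuses every atom's coordinates for every scattering vector, whereas the "self" scheme needs only each atom's own displacement; this difference is what leads to the two distinct calculation and communication patterns.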
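The statement that each core reads only the trajectory data it requires can be illustrated with a minimal MPI sketch. The block decomposition, the Frame type, and the read_frame function below are hypothetical placeholders chosen for the example, not Sassena's actual API.

// Minimal sketch of per-rank trajectory staging with MPI (hypothetical API):
// each rank determines which contiguous block of frames it owns and reads
// only those frames into its local, in-memory copy of the trajectory.
#include <mpi.h>
#include <vector>
#include <cstdio>

struct Frame { std::vector<double> xyz; };   // placeholder frame type

// Hypothetical reader: a real tool would parse a DCD/XTC/... file here;
// this stub only keeps the sketch self-contained and runnable.
Frame read_frame(const char* /*path*/, long /*index*/) { return Frame{}; }

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const long nframes = 100000;             // assumed trajectory length
    // Block decomposition: rank r owns the contiguous range [first, first + count).
    const long base  = nframes / size;
    const long rem   = nframes % size;
    const long count = base + (rank < rem ? 1 : 0);
    const long first = rank * base + (rank < rem ? rank : rem);

    std::vector<Frame> local;                // this rank's volatile in-memory share
    local.reserve(count);
    for (long f = first; f < first + count; ++f)
        local.push_back(read_frame("trajectory.dcd", f));

    std::printf("rank %d of %d staged frames [%ld, %ld)\n",
                rank, size, first, first + count);
    MPI_Finalize();
    return 0;
}

Because every rank touches a disjoint range of the file, the aggregate read volume equals the trajectory size once, which is what keeps the IO bandwidth requirement low as the core count grows.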
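The 2D partitioning can likewise be sketched with MPI_Comm_split: the world communicator is split into independent partitions, each partition handles its own subset of scattering vectors, and all collective communication stays inside the partition. The partition size, the number of q-vectors, and the dummy partial sums below are illustrative assumptions, not Sassena's internal implementation.

// Illustrative 2D partitioning with MPI_Comm_split (a sketch under assumptions):
// dimension 1 distributes independent scattering vectors over partitions,
// dimension 2 splits frames/atoms over the ranks within a partition.
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int wrank = 0, wsize = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &wrank);
    MPI_Comm_size(MPI_COMM_WORLD, &wsize);

    const int partition_size = 4;                        // assumed ranks per partition
    const int nparts = (wsize + partition_size - 1) / partition_size;
    const int color  = wrank / partition_size;           // partition index of this rank

    MPI_Comm partition;                                   // partition-local communicator
    MPI_Comm_split(MPI_COMM_WORLD, color, wrank, &partition);

    int prank = 0, psize = 1;
    MPI_Comm_rank(partition, &prank);
    MPI_Comm_size(partition, &psize);

    const int nqvectors = 100;                            // assumed number of q-vectors
    for (int q = color; q < nqvectors; q += nparts) {
        // Dummy partial sum standing in for sum_j b_j * exp(i q . r_j(t))
        // accumulated over this rank's locally staged frames.
        double partial[2] = {1.0 * prank, 0.5 * q};       // (real, imaginary)
        double total[2]   = {0.0, 0.0};

        // The reduction is local to the partition: no inter-partition traffic.
        MPI_Reduce(partial, total, 2, MPI_DOUBLE, MPI_SUM, 0, partition);

        if (prank == 0)
            std::printf("partition %d (size %d): q-index %d -> F = (%g, %g)\n",
                        color, psize, q, total[0], total[1]);
    }

    MPI_Comm_free(&partition);
    MPI_Finalize();
    return 0;
}

Keeping each q-vector's reduction inside its own communicator is what makes the partitions independent, so adding more partitions scales out over additional scattering vectors without increasing the communication volume per partition.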