Abstract
The parallel scaling of the NAMD package (parallel performance up to 48 cores) has been investigated by estimating the sensitivity of the speedup to the interconnect, with benchmarks testing the parallel performance of Myrinet, Infiniband and Gigabit Ethernet networks. The ApoA1 system of 92 K atoms, as well as systems of 1000 K, 330 K, 210 K, 110 K, 54 K, 27 K and 16 K atoms, have been used as test systems. The Armenian Grid infrastructure (ArmGrid) has been used as the main platform for the series of benchmarks. According to the results, thanks to the high performance of the Myrinet and Infiniband networks, the ArmCluster system and the cluster located at Yerevan State University show reasonable scaling, whereas the scaling of clusters with various types of Gigabit Ethernet interconnects breaks down as soon as the interconnect is involved. However, the clusters equipped with Gigabit Ethernet are sensitive to the system size; in particular, for the 1000 K system no breakdown in scaling is observed. Infiniband, in comparison with Myrinet, makes it possible to obtain nearly ideal scaling regardless of system size. In addition, a benchmarking formula is suggested that gives the computational throughput as a function of the number of processors. These results are important, for instance, for choosing the most appropriate number of processors for a given system.
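The benchmarking formula itself is not reproduced in this abstract. As a minimal sketch only, the Python snippet below shows how a simple Amdahl-style throughput model could be fitted to benchmark data (ns/day versus core count) and then used to pick a processor count; the model form, core counts and throughput values are assumptions for illustration, not the paper's actual formula or data.

    # Illustrative sketch: fit an Amdahl-style throughput model to benchmark data.
    # The model, core counts and ns/day values below are hypothetical placeholders.
    import numpy as np
    from scipy.optimize import curve_fit

    cores = np.array([1, 2, 4, 8, 16, 32, 48])                       # processor counts used in the runs
    ns_per_day = np.array([0.11, 0.21, 0.41, 0.78, 1.4, 2.4, 3.1])   # hypothetical ApoA1 throughput

    def throughput(p, t1, serial_frac):
        # Throughput at p cores: single-core rate t1 scaled by an Amdahl-style factor.
        return t1 * p / (1.0 + serial_frac * (p - 1.0))

    params, _ = curve_fit(throughput, cores, ns_per_day, p0=(0.1, 0.01))
    t1, serial_frac = params
    print(f"single-core throughput ~ {t1:.3f} ns/day, serial fraction ~ {serial_frac:.3%}")

    # Use the fitted model to estimate throughput at a candidate job size.
    print(f"predicted at 64 cores: {throughput(64, *params):.2f} ns/day")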
Highlights
Computational Grids [1,2,3,4,5] consist of various computational layers
Thanks to the high performance of the Myrinet and Infiniband networks, the ArmCluster system and the cluster located at Yerevan State University show reasonable scaling, whereas the scaling of clusters with various types of Gigabit Ethernet interconnects breaks down as soon as the interconnect is involved
In order to ensure that Armenia does not fall behind in this important area, an appropriate national Grid infrastructure has been deployed on the basis of the available distributed computational resources
Summary
Computational Grids [1,2,3,4,5] consist of various computational layers. The packages used are NAMD and GROMACS, freely available open-source codes aimed at high-performance parallel simulation. The parallel scaling of the GROMACS (version 3.3) molecular dynamics code has been studied by Kutzner and coworkers [27]. While they reported high single-node performance for GROMACS, on Ethernet-switched clusters (HP ProCurve 2848 switch) they found a breakdown in scaling when more than two nodes were involved. The scaling of NAMD to ~8000 processors of the Blue Gene/L system has been presented in [28], where 1.2 TF of peak performance was achieved for cutoff simulations and ~0.99 TF with the PME method. In order to better understand the parallel behavior of the NAMD package, a series of benchmarks has been performed within the ArmGrid infrastructure using different types of interconnects and processor features. The results have practical value for end users who wish to port their work to, and make effective use of, the computational resources of Grid sites similar to the investigated clusters
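As a rough illustration of how such a benchmark series can be driven, the sketch below runs the ApoA1 benchmark over a range of core counts and collects the performance reported in the NAMD log. The launcher command, input file path and log pattern are assumptions; they depend on the NAMD build (e.g. charmrun versus an MPI launcher), the NAMD version and the cluster configuration.

    # Hypothetical driver for a NAMD scaling series; commands and log format are assumed.
    import re
    import subprocess

    CORE_COUNTS = [1, 2, 4, 8, 16, 32, 48]
    INPUT_FILE = "apoa1.namd"            # hypothetical path to the ApoA1 benchmark input

    def run_benchmark(cores):
        # Run one NAMD benchmark and return the reported days/ns, if found in the log.
        cmd = ["charmrun", f"+p{cores}", "namd2", INPUT_FILE]   # assumed launcher
        log = subprocess.run(cmd, capture_output=True, text=True).stdout
        # NAMD logs typically contain "Benchmark time: ... days/ns"; the exact
        # wording can differ between versions, so this pattern is an assumption.
        match = re.search(r"Benchmark time:.*?([\d.]+)\s+days/ns", log)
        return float(match.group(1)) if match else None

    for p in CORE_COUNTS:
        days_per_ns = run_benchmark(p)
        if days_per_ns is not None:
            print(f"{p:3d} cores: {days_per_ns:.3f} days/ns")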