Abstract

MDMC2 is a parallel code for performing molecular dynamics simulations on multiply charged clusters. It is a valuable complement to MCMC2, a program devoted to Monte Carlo simulations of multiply charged clusters in the NVT ensemble (Bonhommeau and Gaigeot, 2013). Both the MCMC2 and MDMC2 codes employ a mesoscopic coarse-grained representation of the clusters (or droplets): these clusters are composed of neutral and charged spherical particles/grains that may be polarisable. Each grain is either neutral or charged. The interaction potential is a sum of two-body Lennard-Jones potentials (the main cohesive contribution) and electrostatic terms (the repulsive contribution), possibly supplemented by N-body polarisation interactions. No restriction is imposed on the values of the particle charges and/or polarisabilities. An external field can also be applied to the whole system. The derivatives of the potential-energy surface are determined analytically, which ensures an accurate integration of the classical equations of motion by a velocity Verlet algorithm. Conservation rules, such as energy conservation or centre-of-mass linear momentum conservation, can be checked throughout the simulation. The program also provides statistical information on the run and configuration files that can be used for data post-processing. MDMC2 is provided with a serial conjugate gradient program, called CGMC2, that uses the same analytical derivatives as MDMC2 and is useful for probing the minima of the energy landscape explored during Monte Carlo or molecular dynamics simulations performed on multiply charged clusters.

Program summary
Program title: MDMC2
Catalogue identifier: AERI_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERI_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 146001
No. of bytes in distributed program, including test data, etc.: 2489341
Distribution format: tar.gz
Programming language: Fortran 90 with MPI extensions for parallelisation.
Computer: x86 and IBM platforms.
Operating system:
1. CentOS 5.6, Intel Xeon X5670 2.93 GHz, gfortran/ifort (version 13.1.0) + MPICH2.
2. CentOS 5.3, Intel Xeon E5520 2.27 GHz, gfortran/g95/pgf90 + MPICH2.
3. Red Hat Enterprise 5.3, Intel Xeon X5650 2.67 GHz, gfortran + IntelMPI.
4. IBM Power 6 4.7 GHz, xlf + PESS (IBM parallel library).
Has the code been vectorised or parallelised?: Yes, parallelised using MPI extensions. Number of CPUs used: up to 9999.
RAM (per CPU core): 5–10 MB.
Classification: 3, 16.13, 23.
Nature of problem: We provide a general parallel code to perform the dynamics of multiply charged clusters and a serial conjugate gradient code for locally minimising configurations obtained during the dynamics. Both programs are compatible with the input and output files of the MCMC2 code [1].
Solution method: Parallel molecular dynamics simulations are performed by integrating the classical equations of motion, with all derivatives computed analytically whatever the details of the potential-energy surface. The parallelisation distributes different trajectories over different CPU cores, which makes the parallelisation efficiency optimal; up to 9999 trajectories can be run at the same time. A conjugate gradient program is also provided to investigate the local minima of the energy landscape explored during MD or MC simulations performed with MDMC2 and MCMC2, respectively.
Restrictions: The current version of the code uses Lennard-Jones interactions, as the main cohesive interaction between spherical particles, and electrostatic interactions (charge–charge and polarisation terms). The simulations are performed in the NVE ensemble. There is no confining container, which allows the user to study the fragmentation of the clusters (if any fragmentation occurs); this is our primary goal. Unlike MCMC2, which includes a large choice of histograms for interpreting simulations (such as radial and angular histograms), MDMC2 does not include these features.
Unusual features: The input and output configuration files are fully compatible with the files generated by MCMC2, which makes MDMC2 (+CGMC2) a useful companion of MCMC2 for modelling structural, thermodynamic and dynamic properties of multiply charged clusters. All derivatives, even those including polarisation, are computed analytically in order to prevent inaccuracies due to numerical derivatives. MDMC2 is provided with one random number generator from the LAPACK library.
Running time: The running time depends on the number of molecular dynamics steps, the cluster size, and the type of interactions selected (e.g., polarisation turned on or off). For instance, a 12-trajectory MD simulation of 2 × 10^6 time steps (δt = 10^-4) performed on A_100^100+ clusters, without polarisation, and running on 12 Intel Xeon E5520 2.27 GHz CPU cores lasts 16 min. The same kind of MD simulation performed on the same type of processors for A_309^309+ clusters lasts a bit less than 3 h. The physical memory used by the code also increases from about 44 MB to 74 MB for the whole job.
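As an illustration of the potential model described above (a sum of two-body Lennard-Jones terms and charge–charge electrostatic terms, with analytic derivatives), the following Python sketch evaluates the energy and forces for a set of charged grains. This is not part of the MDMC2 distribution, which is written in Fortran 90; the function name, the use of reduced units, the Coulomb prefactor, and the omission of polarisation are all illustrative assumptions.

```python
import numpy as np

def energy_and_forces(pos, charges, eps=1.0, sigma=1.0, coulomb_k=1.0):
    """Energy and analytic forces for pairwise Lennard-Jones plus
    charge-charge Coulomb interactions (reduced units; hypothetical
    sketch, not the MDMC2 implementation)."""
    n = len(pos)
    energy = 0.0
    forces = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            rij = pos[i] - pos[j]
            r = np.linalg.norm(rij)
            sr6 = (sigma / r) ** 6
            # U = 4*eps*(sr^12 - sr^6) + k*q_i*q_j/r
            energy += (4.0 * eps * (sr6 * sr6 - sr6)
                       + coulomb_k * charges[i] * charges[j] / r)
            # Analytic force magnitude -dU/dr, projected along rij
            f_mag = (24.0 * eps * (2.0 * sr6 * sr6 - sr6) / r
                     + coulomb_k * charges[i] * charges[j] / r ** 2)
            forces[i] += f_mag * rij / r
            forces[j] -= f_mag * rij / r
    return energy, forces
```

For two neutral grains at the Lennard-Jones minimum separation 2^(1/6) σ, this returns an energy of -ε and vanishing forces, which is a convenient sanity check on the analytic gradient.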
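The summary states that the analytic derivatives feed a velocity Verlet integrator, so that conservation rules such as total-energy conservation can be monitored during the run. A minimal Python sketch of one velocity Verlet step, under the assumption of a user-supplied function returning energy and forces (names are hypothetical):

```python
import numpy as np

def velocity_verlet_step(pos, vel, forces, masses, dt, force_fn):
    """One velocity Verlet step: positions advance using the current
    forces; velocities use the average of old and new accelerations."""
    acc = forces / masses[:, None]
    pos_new = pos + vel * dt + 0.5 * acc * dt * dt
    energy_new, forces_new = force_fn(pos_new)
    acc_new = forces_new / masses[:, None]
    vel_new = vel + 0.5 * (acc + acc_new) * dt
    return pos_new, vel_new, forces_new, energy_new
```

Because the forces are analytic rather than finite-differenced, the total energy computed along such a trajectory drifts only through the O(δt²) error of the integrator itself, which is what makes the conservation checks mentioned above meaningful.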
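The parallelisation scheme described in the summary is embarrassingly parallel: each MPI process integrates its own independent trajectories, with no inter-process communication during the run. A language-neutral sketch (here in Python, with a hypothetical helper name; MDMC2 itself uses Fortran 90 MPI) of how trajectory indices might be block-distributed over ranks:

```python
def trajectories_for_rank(rank, n_ranks, n_traj):
    """Block-distribute trajectory indices 0..n_traj-1 over MPI ranks.
    Trajectories are independent, so each rank can integrate its share
    without communicating with the others (embarrassingly parallel)."""
    base, extra = divmod(n_traj, n_ranks)
    start = rank * base + min(rank, extra)
    count = base + (1 if rank < extra else 0)
    return list(range(start, start + count))
```

With 10 trajectories on 4 ranks, this assigns 3, 3, 2 and 2 trajectories to ranks 0-3; since each trajectory is independent, parallel efficiency stays near-optimal up to one trajectory per core.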
