We have implemented a Monte Carlo code, in reduced units, providing structural and thermodynamic properties of multiply charged Lennard-Jones droplets A_N^{n+}, composed of N individual particles among which n are charged, each one carrying a charge q_i (q_i can be positive or negative). The cluster has a total net charge Q = ∑_{i=1}^{n} q_i (Q > 0 or Q < 0). The interactions between particles are modelled by a sum of pairwise Lennard-Jones potentials and electrostatic terms, including polarisation. The statistical properties of the cluster can be obtained from (i) parallel Monte Carlo simulations whose replicas are run at different temperatures, starting from configurations with the same number of charged particles n and the same individual charges q_i (Parallel Tempering Monte Carlo), or (ii) parallel Monte Carlo simulations whose replicas are run at the same temperature but from configurations with different q_i or n (Parallel Charging Monte Carlo). The code provides statistical data (evaporation rates, acceptance/rejection rates, etc.), energetic data (mean energies, heat capacities, etc.), and structural data (radial and angular distributions) for comprehensive analyses. A complete manual of the code is provided.

Program summary

Program title: MCMC2
Catalogue identifier: AENZ_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENZ_v1_0.html
Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 146790
No. of bytes in distributed program, including test data, etc.: 1501715
Distribution format: tar.gz
Programming language: Fortran 90 with MPI extensions for parallelisation.
Computer: x86 and IBM platforms.
Operating system:
1. CentOS 5.6, Intel Xeon X5670 2.93 GHz, gfortran + MPICH2.
2. CentOS 5.3, Intel Xeon E5520 2.27 GHz, gfortran/g95/pgf90 + MPICH2.
3. Red Hat Enterprise 5.3, Intel Xeon X5650 2.67 GHz, gfortran + IntelMPI.
4. IBM Power 6, 4.7 GHz, xlf + PESS (IBM parallel library).
Has the code been vectorised or parallelised?: Yes, parallelised using MPI extensions. Number of CPUs used: 2 to ∼40, although the code allows up to 999 CPUs if desired.
RAM: 10–20 MB
Classification: 23.
Nature of problem: We provide a general parallel code to investigate the structural and thermodynamic properties of multiply charged clusters.
Solution method: Parallel Monte Carlo methods are implemented for the exploration of the configuration space of multiply charged clusters. Two parallel Monte Carlo methods were found appropriate to achieve this goal: Parallel Tempering, where replicas of the same cluster at different temperatures are distributed among different CPUs, and Parallel Charging, where replicas at the same temperature but with different particle charges or numbers of charged particles are distributed among different CPUs.
Restrictions: The current version of the code uses Lennard-Jones interactions, as the main cohesive interaction between spherical particles, and electrostatic interactions (charge–charge, charge–induced dipole, induced dipole–induced dipole, polarisation). Monte Carlo simulations can only be performed in the NVT ensemble in the present code.
Unusual features: The Parallel Charging method, based on the same philosophy as Parallel Tempering but with particle charges and the number of charged particles as parameters instead of temperature, is an interesting new approach to exploring energy landscapes. Splitting of the simulations is allowed and the averages are updated accordingly.
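For orientation, the sketch below shows a standard Metropolis-type acceptance test for exchanging configurations between two replicas, the step that underlies replica-exchange schemes such as Parallel Tempering. It is a minimal illustration written for this summary, not a routine extracted from MCMC2; the function name and variables (beta_i, e_i, etc.) are placeholders, and beta denotes the inverse temperature in reduced units.

! Illustrative sketch only (not MCMC2 source): accept or reject the
! exchange of configurations between replicas i and j with probability
! min(1, exp[(beta_i - beta_j)*(e_i - e_j)]), which preserves detailed balance.
logical function accept_swap(beta_i, beta_j, e_i, e_j)
  implicit none
  double precision, intent(in) :: beta_i, beta_j   ! inverse temperatures of the two replicas
  double precision, intent(in) :: e_i, e_j         ! their current potential energies
  double precision :: delta, r
  delta = (beta_i - beta_j)*(e_i - e_j)
  if (delta >= 0.0d0) then
     accept_swap = .true.
  else
     call random_number(r)        ! uniform random number in [0,1)
     accept_swap = (r < exp(delta))
  end if
end function accept_swap

In a Parallel Charging run the same kind of test would apply with the charge set, rather than the temperature, distinguishing the replicas; the exact acceptance rule used by MCMC2 for that case is described in the manual.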
Running time: The running time depends on the number of Monte Carlo steps, the cluster size, and the type of interactions selected (e.g., polarisation turned on or off, and the method used for calculating the induced dipoles). Typically, a complete simulation can last from a few tens of minutes to a few hours for small clusters (N ≤ 100, not including polarisation interactions), up to one week for large clusters (N ≥ 1000, not including polarisation interactions), and several weeks for large clusters (N ≥ 1000) when polarisation interactions are included. A restart procedure has been implemented that enables the accumulation phase of a simulation to be split into several runs.
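As an illustration of how a split accumulation phase can work, the module below sketches running energy accumulators that are written to, and read back from, a checkpoint file between runs; the configurational heat capacity then follows from the fluctuation formula C_V = (⟨E²⟩ − ⟨E⟩²)/(k_B T²), with k_B = 1 in reduced units. This is a minimal sketch written for this summary, not the actual MCMC2 restart machinery; the module, file layout, and names are assumptions.

! Illustrative sketch only (not MCMC2 source): energy accumulators that
! survive a restart, so that averages keep growing across split runs.
module accumulators
  implicit none
  integer          :: nsamp  = 0        ! number of accumulated samples
  double precision :: sum_e  = 0.0d0    ! running sum of E
  double precision :: sum_e2 = 0.0d0    ! running sum of E**2
contains
  subroutine accumulate(e)
    double precision, intent(in) :: e
    nsamp  = nsamp + 1
    sum_e  = sum_e + e
    sum_e2 = sum_e2 + e*e
  end subroutine accumulate

  subroutine save_checkpoint(fname)    ! write accumulators at the end of a run
    character(len=*), intent(in) :: fname
    open(unit=11, file=fname, form='unformatted', status='replace')
    write(11) nsamp, sum_e, sum_e2
    close(11)
  end subroutine save_checkpoint

  subroutine load_checkpoint(fname)    ! read them back when the run is resumed
    character(len=*), intent(in) :: fname
    open(unit=11, file=fname, form='unformatted', status='old')
    read(11) nsamp, sum_e, sum_e2
    close(11)
  end subroutine load_checkpoint

  function heat_capacity(temperature) result(cv)
    double precision, intent(in) :: temperature
    double precision :: cv, emean
    emean = sum_e/dble(nsamp)
    cv = (sum_e2/dble(nsamp) - emean*emean)/temperature**2   ! kB = 1 (reduced units)
  end function heat_capacity
end module accumulators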