Multi-component polymer systems are important for the development of new materials because of their ability to phase-separate or self-assemble into nanostructures. The Single-Chain-in-Mean-Field (SCMF) algorithm, in conjunction with a soft, coarse-grained polymer model, is an established technique to investigate these soft-matter systems. Here we present an implementation of this method: SOft coarse grained Monte-Carlo Acceleration (SOMA). It is suitable for simulating large systems with up to billions of particles, yet versatile enough to study the properties of different kinds of molecular architectures and interactions. We achieve efficient simulations by employing accelerators such as GPUs, on workstations as well as on supercomputers. The implementation remains flexible and maintainable because it is written in the scientific programming language C, enhanced by OpenACC pragmas for the accelerators. We present implementation details and features of the program package, investigate the scalability of SOMA, and discuss two applications that cover system sizes difficult to reach with other common particle-based simulation methods.

Program summary
Program Title: SOMA
Program Files doi: http://dx.doi.org/10.17632/j3thz43k93.1
Licensing provisions: GNU Lesser General Public License version 3
Programming language: C99, OpenACC, OpenMP, MPI, Python
Nature of problem: Efficient simulation of polymer materials and their phase separation or self-assembly using a highly coarse-grained, soft, particle-based model [1]. The simulations help predict self-assembled structures that, for example, find application in the fabrication of large-scale, dense arrays of nano-structures by Directed Self-Assembly (DSA).
Solution method: Representation of soft, non-bonded interactions by quasi-instantaneous fields on a collocation grid using the Single-Chain-in-Mean-Field (SCMF) algorithm [2], and sampling of configuration space using local random Monte-Carlo (MC) displacements and Smart Monte-Carlo (SMC) moves. Parallelization uses MPI and accelerators such as Graphics Processing Units (GPUs).
Restrictions: The program has not been tested for more than 10 billion particles.
Unusual features: Efficient simulation on different hardware architectures and accelerators, including multi-core Central Processing Units (CPUs) and GPUs. Furthermore, it is possible to combine different architectures within a single simulation.

[1] M. Müller, J. Stat. Phys. 145 (2011) 967
[2] K. C. Daoulas, M. Müller, J. Chem. Phys. 125 (2006) 184904
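To illustrate the kind of computation the solution method describes, the following is a minimal C sketch, not part of SOMA and not its API: one sweep of local random Monte-Carlo displacements in which the non-bonded energy of a trial move is read from a quasi-instantaneous field stored on a collocation grid, with the particle loop offloaded to an accelerator via an OpenACC pragma. All names and parameters (w_field, cell_of, prng, the box and grid sizes) are illustrative assumptions; bonded contributions and Smart Monte-Carlo moves are omitted.

/* Sketch only: NOT the SOMA implementation. Field names, grid layout and
 * constants are illustrative assumptions for demonstration purposes. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <stdint.h>

#define N_PART 1024   /* number of particles (illustrative)        */
#define N_GRID 16     /* collocation cells per dimension (assumed) */
#define BOX    16.0f  /* box edge length (assumed units)           */
#define D_MAX  0.5f   /* maximum trial displacement (assumed)      */

/* Tiny per-particle PRNG (xorshift32) so the loop has no shared state. */
#pragma acc routine seq
static inline float prng(uint32_t *state)
{
    uint32_t x = *state;
    x ^= x << 13; x ^= x >> 17; x ^= x << 5;
    *state = x;
    return (float)x / (float)UINT32_MAX;
}

/* Map a position to a collocation-grid cell (assumed linear layout). */
#pragma acc routine seq
static inline int cell_of(float x, float y, float z)
{
    int ix = ((int)(x / BOX * N_GRID) % N_GRID + N_GRID) % N_GRID;
    int iy = ((int)(y / BOX * N_GRID) % N_GRID + N_GRID) % N_GRID;
    int iz = ((int)(z / BOX * N_GRID) % N_GRID + N_GRID) % N_GRID;
    return (ix * N_GRID + iy) * N_GRID + iz;
}

int main(void)
{
    float x[N_PART], y[N_PART], z[N_PART];
    uint32_t seed[N_PART];
    float w_field[N_GRID * N_GRID * N_GRID]; /* quasi-instantaneous field (dummy values) */

    /* Initialize positions, per-particle seeds and a dummy field on the host. */
    for (int i = 0; i < N_PART; ++i) {
        x[i] = (float)rand() / RAND_MAX * BOX;
        y[i] = (float)rand() / RAND_MAX * BOX;
        z[i] = (float)rand() / RAND_MAX * BOX;
        seed[i] = 1u + (uint32_t)i;
    }
    for (int c = 0; c < N_GRID * N_GRID * N_GRID; ++c)
        w_field[c] = 0.1f * (float)(c % 7);

    int accepted = 0;

    /* One MC sweep: each particle attempts an independent local move while
       the field is held fixed (the quasi-instantaneous approximation). */
    #pragma acc parallel loop copy(x, y, z, seed) copyin(w_field) reduction(+:accepted)
    for (int i = 0; i < N_PART; ++i) {
        float xn = x[i] + D_MAX * (2.0f * prng(&seed[i]) - 1.0f);
        float yn = y[i] + D_MAX * (2.0f * prng(&seed[i]) - 1.0f);
        float zn = z[i] + D_MAX * (2.0f * prng(&seed[i]) - 1.0f);

        /* Non-bonded energy change (in units of kT) read from the grid. */
        float dE = w_field[cell_of(xn, yn, zn)] - w_field[cell_of(x[i], y[i], z[i])];

        /* Metropolis acceptance; bonded terms are omitted in this sketch. */
        if (dE <= 0.0f || prng(&seed[i]) < expf(-dE)) {
            x[i] = xn; y[i] = yn; z[i] = zn;
            accepted += 1;
        }
    }

    printf("acceptance ratio: %.3f\n", (double)accepted / N_PART);
    return 0;
}

Without an OpenACC compiler the pragmas are ignored and the sketch runs serially on the host; with one, the same source is offloaded to a GPU, which is the portability idea the abstract attributes to C plus OpenACC.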