Abstract
<p>Within the ESiWACE2 project we parallelized and optimized OBLIMAP, a climate model–ice sheet model coupler that can be used for offline and online coupling with embeddable mapping routines. To anticipate future demand for higher-resolution and/or adaptive-mesh applications, a parallel implementation of OBLIMAP's Fortran code has been developed with MPI. The data-intensive nature of this mapping task required a shared-memory approach across the processors of each compute node to prevent node memory from becoming the limiting bottleneck. The current parallel implementation also allows multi-node scaling and includes parallel NetCDF I/O as well as loop optimizations. Results show that the new parallel implementation offers better performance and scales well. On a single node, the shared-memory approach now allows all available cores to be used: up to 128 cores in our experiments on the Antarctica 20x20 km test case, where the original code was limited to 64 cores on this high-end node and to only 8 cores on moderate platforms. On the Greenland 2x2 km test case, the multi-node parallelization offers a speedup of 4.4x on 4 high-end compute nodes equipped with 128 cores each, compared to the original code, which was able to run on only 1 node. This paves the way for establishing OBLIMAP as a candidate ice sheet coupling library for large-scale, high-resolution climate modeling.</p>