Inside fluidized bed reactors, gas–solid flows are very complex: multi-scale, coupled, reactive, turbulent and unsteady. Accounting for them in an Euler n-fluid framework leads to very expensive numerical simulations at academic scales, and even more so at industrial scales. 3D numerical simulations of gas–particle fluidized beds at industrial scales are limited by the High Performance Computing (HPC) capabilities of Computational Fluid Dynamics (CFD) software and by the available computational power. In recent years, pre-Exascale supercomputers have come into operation, offering better energy efficiency and continuously increasing computational resources.

The present article is a direct continuation of previous work (Neau et al., 2020), which demonstrated the feasibility of a massively parallel simulation of an industrial-scale polydisperse fluidized-bed reactor with a mesh of 1 billion cells. Since then, we have pushed simulations of these systems to their limits by performing large-scale computations on even more recent and powerful supercomputers, once again using up to their entire capacity (up to 286,000 cores). We used the same fluidized bed reactor but with more refined unstructured meshes of 8 and 64 billion cells.

This article focuses on the efficiency and performance of the neptune_cfd code (based on an Euler n-fluid approach), measured on several supercomputers with meshes of 1, 8 and 64 billion cells. It presents the sensitivity studies conducted to improve HPC performance at these very large scales.

On the basis of these highly refined simulations of industrial-scale systems with neptune_cfd on pre-Exascale supercomputers, we defined the upper limits of the simulations we can manage efficiently in terms of mesh size, number of MPI processes and simulation time. One-billion-cell computations are the most refined computations usable for production. Eight-billion-cell computations perform well up to 60,000 cores from an HPC point of view, with an efficiency above 85%, but are still very expensive: restart and mesh files are very large, post-processing is complicated, and data management becomes nearly impossible. Sixty-four-billion-cell computations exceed every limit: solver, supercomputer, MPI, file size, post-processing and data management. For these reasons, we barely managed to execute more than a few iterations.

Over the last 30 years, the HPC capabilities of neptune_cfd have improved exponentially by tracking hardware evolution and implementing state-of-the-art techniques for parallel and distributed computing. However, our latest findings show that the currently implemented MPI/multigrid approaches are not sufficient to fully benefit from pre-Exascale systems. This work allows us to identify the current bottlenecks in neptune_cfd and to formulate guidelines for an upcoming Exascale-ready version of the code that will, we hope, be able to manage even the most complex industrial-scale gas–particle systems.
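As a point of reference for the efficiency figure quoted above, a generic definition of strong-scaling parallel efficiency is sketched below; the symbols $E$, $T$, $N$ and $N_{\mathrm{ref}}$ are introduced here for illustration and are not necessarily the exact metric used in the article:

\[
E(N) \;=\; \frac{N_{\mathrm{ref}}\, T(N_{\mathrm{ref}})}{N\, T(N)},
\]

where $T(N)$ is the wall-clock time per iteration on $N$ cores (or MPI processes) and $N_{\mathrm{ref}}$ is the reference core count of the baseline run. $E(N)=1$ corresponds to ideal strong scaling, so an efficiency above 85% at 60,000 cores means the measured speedup reaches at least 85% of the ideal speedup relative to the reference run.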