Abstract

The development of parallel software for simulating large-scale, time-dependent 3D problems involving the flow of multimode viscoelastic fluids is presented. The computing environment relies on PETSc [27] (Portable, Extensible Toolkit for Scientific Computation) components integrated with a finite element solver. A DEVSS-G/SUPG formulation, together with a log-representation of the conformation tensor, is used to stabilize the code. An operator-splitting time-integration scheme is implemented whereby the solution of the continuity and momentum balance equations is decoupled from that of the constitutive equations. The large linear system arising from the continuity and momentum discretization is solved by the multifrontal, massively parallel solver MUMPS [37]. The time-integration scheme applied to the constitutive equation gives rise to six decoupled linear systems (one for each stress component), also solved by MUMPS. Three test cases are selected to show the performance and limitations of the parallel code. In the first example, the flow of a Giesekus fluid in a pressure-driven square-shaped channel is examined. In the second test, the unsteady drag on a sphere is computed and, in the third example, a solid particle in a sheared viscoelastic fluid is considered, thus involving a moving boundary. The parallel software achieves high performance for problems in which the finite element mesh does not change in time. In this case, the coefficients of the Stokes-like system are constant in time, and the matrix factorization is computed at the first time step only and reused throughout the simulation, drastically reducing the total computational time. On the other hand, when the factorization needs to be recomputed at each time step, the solution time increases by more than one order of magnitude, suggesting that alternative iterative solvers should be sought.
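The factor-once/solve-many strategy described above can be illustrated outside the PETSc/MUMPS stack. The sketch below uses SciPy's sparse LU factorization as a stand-in for the MUMPS multifrontal factorization; the matrix, sizes, and time loop are hypothetical, but the pattern (one expensive factorization, cheap back-substitutions at every subsequent time step) is the same.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Stand-in for the Stokes-like system matrix: a 1D Laplacian-type
# operator whose coefficients do not change in time (fixed mesh).
n = 100
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")

# Expensive step: performed ONCE, at the first time step only.
lu = spla.splu(A)

# Time loop: only the right-hand side changes, so each step costs
# just a forward/backward substitution with the stored factors.
rng = np.random.default_rng(0)
x = np.zeros(n)
for step in range(10):
    b = rng.random(n)      # new load vector at each time step
    x = lu.solve(b)        # cheap solve reusing the factorization
```

When the mesh moves (as in the third test case), `A` changes at every step, the factorization must be redone each time, and this advantage is lost, which is the regime where the abstract reports the order-of-magnitude slowdown.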
Finally, for all the test cases, the scalability of the parallel software is evaluated using the speed-up normalized with respect to the execution time on a processor count greater than one, since the high memory requirements prevent single-processor runs. Experimental results validate the performance gain of the parallel code as both the problem size and the number of processors increase.
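The normalized speed-up can be made concrete with a short helper. The exact normalization used in the paper is not spelled out in the abstract; the function below assumes one common convention, where the reference is the smallest processor count `p_ref` that fits in memory and ideal scaling gives S(p) = p. The timing values are made up for illustration.

```python
def normalized_speedup(times, p_ref):
    """Speed-up normalized to a multi-processor baseline.

    times: dict mapping processor count p -> wall-clock time T(p)
    p_ref: smallest feasible processor count (the memory-bound baseline)

    Assumed convention: S(p) = p_ref * T(p_ref) / T(p),
    so that S(p_ref) = p_ref and ideal scaling yields S(p) = p.
    """
    t_ref = times[p_ref]
    return {p: p_ref * t_ref / t for p, t in times.items()}

# Hypothetical timings: single-processor runs do not fit in memory,
# so the baseline is p_ref = 2.
timings = {2: 100.0, 4: 55.0, 8: 30.0}
speedup = normalized_speedup(timings, p_ref=2)
```

Here `speedup[2]` equals 2 by construction, and values below the ideal S(p) = p line quantify the parallel overhead.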
