Abstract

The lattice Boltzmann method (LBM) is adopted to compute two- and three-dimensional lid-driven cavity flows in order to examine the influence of memory management on computational performance on graphics processing units (GPUs). Both single-relaxation-time (SRT) and multi-relaxation-time (MRT) LBM are considered. The computations are conducted on NVIDIA GeForce Titan, Tesla C2050, and GeForce GTX 560Ti devices. Performance using global memory deteriorates greatly when MRT LBM is used, because the scheme requests more information from global memory than its SRT counterpart. On the other hand, when on-chip memory is adopted, the difference between MRT and SRT is not significant. Moreover, the LBM streaming procedure using offset reading outperforms offset writing by 50% to 100%, and this holds for both SRT and MRT LBM. Finally, comparisons across the GPU platforms indicate that the Titan, as expected, outperforms the other devices: for three-dimensional cavity flow simulations it attains speedups of 227 (single precision) and 193 (double precision) over its Intel Core i7-990 CPU counterpart and is roughly four times faster than the GTX 560Ti and Tesla C2050.
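
The comparison between "offset reading" and "offset writing" refers to how the streaming step addresses global memory. The following minimal CUDA sketch contrasts the two layouts for a D2Q9 lattice; the velocity set, structure-of-arrays layout, grid sizes, and kernel names (stream_pull, stream_push) are illustrative assumptions, not the authors' actual implementation.

```cuda
#define NX 256
#define NY 256
#define Q  9

// D2Q9 discrete velocities (illustrative ordering).
__constant__ int cx[Q] = { 0, 1, 0,-1, 0, 1,-1,-1, 1 };
__constant__ int cy[Q] = { 0, 0, 1, 0,-1, 1, 1,-1,-1 };

// "Offset reading" (pull scheme): each thread gathers distributions from its
// upstream neighbours and writes to its own node, so the writes stay aligned
// and the shifted (offset) accesses fall on the load side.
__global__ void stream_pull(const float* f_src, float* f_dst)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= NX || y >= NY) return;

    for (int k = 0; k < Q; ++k) {
        int xs = (x - cx[k] + NX) % NX;   // periodic upstream neighbour
        int ys = (y - cy[k] + NY) % NY;
        f_dst[k*NX*NY + y*NX + x] = f_src[k*NX*NY + ys*NX + xs];
    }
}

// "Offset writing" (push scheme): each thread scatters its own distributions
// to downstream neighbours, so the shifted accesses fall on the store side;
// this is the variant the abstract reports as 50-100% slower.
__global__ void stream_push(const float* f_src, float* f_dst)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= NX || y >= NY) return;

    for (int k = 0; k < Q; ++k) {
        int xd = (x + cx[k] + NX) % NX;   // periodic downstream neighbour
        int yd = (y + cy[k] + NY) % NY;
        f_dst[k*NX*NY + yd*NX + xd] = f_src[k*NX*NY + y*NX + x];
    }
}
```

Both kernels move the same data; the only difference is whether the neighbour offset is applied when reading or when writing, which is the distinction behind the reported performance gap.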
