Abstract

While most software is originally designed for serial or parallel execution on CPU, and porting to GPU comes later in its development, GPUSPH was designed from the ground up to run on GPUs using CUDA. Making it accessible to a wider audience by introducing support for other computational hardware, and in particular CPUs, poses challenges that are complementary to the ones normally faced when porting CPU code to GPU. We present the approach we have adopted to support CPUs as computational devices in GPUSPH with minimal code changes and low developer effort. Detailed benchmarks illustrating the performance of the implementation and its scalability across multiple cores in both single-socket and NUMA configurations show good strong and weak scaling.
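To give a flavour of the kind of single-source approach the abstract alludes to (running CUDA-first code on the CPU with minimal changes), the sketch below shows one common pattern: the per-particle body is shared, and only a thin launch wrapper differs between the GPU and CPU builds. The names (euler_step, run_euler, DEVICE_FN) and the OpenMP fallback are illustrative assumptions, not GPUSPH's actual implementation.

```cpp
// Hypothetical single-source kernel: with nvcc it builds as a CUDA kernel,
// otherwise the same per-particle body is driven by a host-side loop.
#include <cstddef>

#ifdef __CUDACC__
  #define DEVICE_FN __host__ __device__
#else
  #define DEVICE_FN
#endif

// Per-particle work, shared by both back-ends (illustrative example).
DEVICE_FN inline void euler_step(float *pos, const float *vel,
                                 float dt, size_t i)
{
    pos[i] += vel[i] * dt;
}

#ifdef __CUDACC__
__global__ void euler_kernel(float *pos, const float *vel,
                             float dt, size_t n)
{
    size_t i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        euler_step(pos, vel, dt, i);
}

void run_euler(float *pos, const float *vel, float dt, size_t n)
{
    const unsigned block = 256;
    euler_kernel<<<(n + block - 1) / block, block>>>(pos, vel, dt, n);
}
#else
// CPU fallback: the launch grid collapses into a loop over particles,
// optionally parallelised across cores with OpenMP.
void run_euler(float *pos, const float *vel, float dt, size_t n)
{
    #pragma omp parallel for
    for (size_t i = 0; i < n; ++i)
        euler_step(pos, vel, dt, i);
}
#endif
```

Under this kind of scheme the bulk of the computational code is written once, which is consistent with the "minimal code changes and low developer effort" goal stated above; the paper itself details the actual mechanism used by GPUSPH.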
