Abstract

Recently, the parallel processing of physical simulations on GPUs (Graphics Processing Units) has attracted the attention of scientists and engineers. A programming environment for numerical analysis (CUDA) is also available from the GPU vendor. In this paper, we focus on one of the popular numerical algorithms for linear systems, the conjugate gradient (CG) method, and implement it in the GPU environment. We describe the programming recipes and verify their effectiveness on an NVIDIA GeForce GTX 280.
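For reference, the CG iteration the abstract refers to can be sketched in plain Python with NumPy. This is a generic textbook formulation of the method, not the paper's GPU implementation; in a CUDA version, each vector operation (matrix–vector product, dot product, AXPY) would map to a GPU kernel.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for a symmetric positive-definite matrix A."""
    x = np.zeros_like(b)
    r = b - A @ x            # initial residual
    p = r.copy()             # initial search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)   # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:   # converged
            break
        p = r + (rs_new / rs_old) * p  # update search direction
        rs_old = rs_new
    return x

# Example: a small SPD system
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

Each iteration costs one matrix–vector product plus a few dot products and vector updates, which is what makes CG attractive for data-parallel hardware such as GPUs.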
