Abstract

In this paper, we describe our efforts to implement machine learning models on commodity hardware: a general-purpose graphics processing unit (GPU) and multiple GPUs connected with the Message Passing Interface (MPI). We consider risk models that require a large number of iterations to compute the probability of default for a credit account, based on Markov chain analysis. We discuss data structures and efficient implementations of machine learning models on the GPU platform. The idea is to leverage fast GPU RAM and thousands of GPU cores to speed up execution and reduce overall runtime. However, increasing the number of GPUs in our experiments also increases programming complexity and the amount of I/O, which raises overall turnaround time. We benchmark the scalability and performance of our implementation with respect to the size of the data; performing model computations on large amounts of data is a compute-intensive and costly task. We propose four combinations of CPU, GPU, and MPI for machine learning modeling. Experiments on real data show that training a machine learning model on a single GPU outperforms the CPU, multiple GPUs, and GPUs connected with MPI.
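The abstract describes computing a probability of default by iterating a Markov chain over credit-account states. As a minimal sketch of that iterative kernel (the transition matrix, state labels, and horizon below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

# Hypothetical 4-state monthly transition matrix for a credit account.
# States: [current, delinquent, default, paid_off]; "default" and
# "paid_off" are absorbing. All probabilities are illustrative only.
P = np.array([
    [0.90, 0.07, 0.01, 0.02],
    [0.40, 0.45, 0.13, 0.02],
    [0.00, 0.00, 1.00, 0.00],
    [0.00, 0.00, 0.00, 1.00],
])

def default_probability(P, start_state=0, horizon=36):
    """Probability mass absorbed in the default state within `horizon`
    monthly transitions, via repeated matrix-vector products -- the kind
    of iteration that maps naturally onto GPU cores."""
    state = np.zeros(P.shape[0])
    state[start_state] = 1.0
    for _ in range(horizon):
        state = state @ P  # one Markov step
    return state[2]  # index of the default state

pd_36m = default_probability(P)
```

On a GPU, the matrix-vector products above would run over large batches of accounts at once, which is where the thousands of cores and fast GPU RAM mentioned in the abstract pay off.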
