Abstract

Matrices are one of the most widely used forms of data representation for real-world problems. Many of these matrices are very large but sparse, so the information they contain is relatively small compared to their size. This makes heavy computational resources necessary to process them within a short time. One solution for processing a sparse matrix efficiently is to store it in a specialized sparse matrix format such as the Sliced Coordinate List (SCOO). The SCOO format for sparse matrices has been developed and combined with an implementation using the Compute Unified Device Architecture (CUDA). In this research, the performance of the SCOO implementation on a GPU with CUDA is compared with another sparse matrix format, the Coordinate List (COO), in terms of memory usage and execution time. The results show that although the SCOO implementation uses 1.000529 times more memory than the COO format, its serial performance is 3.18 times faster than serial COO; furthermore, when the SCOO implementation is run in parallel on a GPU with CUDA, it performs around 123.8 times faster than parallel COO, or 77 times faster than parallel COO implemented with CUSP, one of the available libraries for CUDA.
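
For readers unfamiliar with the two formats, the sketch below shows how COO and SCOO data are commonly laid out in memory. The struct names, field choices, and the slice-by-row-range policy are illustrative assumptions made for this summary, not the authors' actual implementation.

    // Host-side sketch of the two storage formats compared in the paper.
    // Field names and the slicing policy are assumptions for illustration.
    #include <vector>

    struct COOMatrix {                 // Coordinate List: one (row, col, value) triple per nonzero
        int rows, cols, nnz;
        std::vector<int>   row_idx;    // row index of each nonzero
        std::vector<int>   col_idx;    // column index of each nonzero
        std::vector<float> val;        // value of each nonzero
    };

    struct SCOOMatrix {                // Sliced COO: nonzeros grouped into contiguous row slices
        int rows, cols, nnz;
        int slice_height;              // rows per slice, chosen so a slice's partial results fit in fast GPU memory
        std::vector<int>   slice_ptr;  // offset of each slice's first entry in the arrays below
        std::vector<int>   row_idx;    // row index of each nonzero (within its slice)
        std::vector<int>   col_idx;    // column index of each nonzero
        std::vector<float> val;        // value of each nonzero
    };

The intent of slicing, as typically described for SCOO, is to let each GPU thread block work on one slice and accumulate partial results in fast on-chip memory, which is why the format accepts a little extra index storage in exchange for higher SpMV throughput.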

Highlights

  • Many problems in scientific areas and real-world applications are modeled as sparse matrices

  • We evaluated the performance and memory usage of Sparse Matrix-Vector Multiplication (SpMV) using the Coordinate List (COO) and Sliced Coordinate List (SCOO) formats

  • The memory consumption of the COO format can be reduced by up to 99.54242% (see the illustration after this list)
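
One plausible reading of the last highlight is that storing only the nonzero entries, as COO does, cuts memory relative to a dense representation. Under that assumed reading, the snippet below works through a back-of-the-envelope example; the matrix size, nonzero count, and 4-byte element sizes are hypothetical and are not the paper's datasets.

    // Rough dense-vs-COO memory comparison with assumed, illustrative sizes.
    #include <cstdio>

    int main() {
        const long long n   = 20000;     // assume a 20,000 x 20,000 matrix
        const long long nnz = 2000000;   // assume ~0.5% of the entries are nonzero
        const long long dense_bytes = n * n * sizeof(float);
        const long long coo_bytes   = nnz * (sizeof(float) + 2 * sizeof(int)); // value + row + column per nonzero
        const double reduction = 100.0 * (1.0 - (double)coo_bytes / (double)dense_bytes);
        std::printf("dense: %lld MB, COO: %lld MB, reduction: %.2f%%\n",
                    dense_bytes >> 20, coo_bytes >> 20, reduction);
        return 0;
    }

For this hypothetical matrix the reduction works out to roughly 98.5%; even sparser matrices push the figure higher.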

Summary

INTRODUCTION

In daily life, we often encounter problems that can be formulated mathematically in order to solve them. A matrix can be used to represent real-world information related to a problem that has been formulated in mathematical form. This representation is commonly used in cases that use a graph as the underlying data structure [7]. Many of the matrices that represent real-world problems are sparse: they are very large, but the information stored in them is relatively small, so greater computational resources are needed to perform calculations on them [8]. CUSP is an open-source library of generic parallel algorithms for sparse linear algebra and graph computation on GPUs with the CUDA architecture.
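
To make the SpMV operation studied in this paper concrete, the sketch below shows the standard parallel COO kernel in CUDA, in which each thread handles one nonzero and accumulates its contribution with an atomic add. This is a generic textbook formulation for the COO layout sketched earlier, not the authors' tuned SCOO kernel and not CUSP's internal implementation.

    // Minimal CUDA sketch of parallel COO SpMV: y += A * x.
    // One thread per nonzero; concurrent updates to the same output row
    // are serialized by atomicAdd. Generic illustration only.
    __global__ void spmv_coo(int nnz,
                             const int   *row_idx,
                             const int   *col_idx,
                             const float *val,
                             const float *x,
                             float       *y)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < nnz) {
            atomicAdd(&y[row_idx[i]], val[i] * x[col_idx[i]]);
        }
    }

    // Host-side launch, assuming the device arrays are already allocated and filled:
    //   int threads = 256;
    //   int blocks  = (nnz + threads - 1) / threads;
    //   spmv_coo<<<blocks, threads>>>(nnz, d_row, d_col, d_val, d_x, d_y);

In the experiments summarized in the abstract, CUSP's parallel COO multiplication serves as one of the baselines against which SCOO is compared.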

The remainder of the paper is organized into the following sections:

  • Sparse Matrix
  • Vector
  • Sparse Matrix-Vector Multiplication
  • SpMV Dataset
  • Convert to Sparse Matrix
  • SpMV Operation
  • Performance Evaluation
  • Testing Results and Analysis
  • Conclusion