Abstract
Sparse matrix-vector multiplication (SMV) is a key operation in many scientific and engineering applications. Field Programmable Gate Arrays (FPGAs) have the potential to significantly improve the performance of computationally intensive applications that are dominated by SMV. A shortcoming of most existing FPGA SMV implementations is that they use on-chip Block RAM or external SRAM to store the matrix, which severely limits the problem size. Real applications, such as Finite Element Analysis (FEA), require large memories; realistically, this capacity can only be provided by commodity DRAM. In this paper we address the problem of SMV for large matrices using commodity memory. We implement SPAR, a special-purpose architecture previously proposed for large SMV computations in a VLSI co-processor using cheap external memory. We present an empirical evaluation of the SPAR architecture for use on FPGAs and highlight challenges that arise when tackling realistic FEA problems.