Abstract

Massively parallel architectures are a promising solution for speeding up data-intensive applications and providing the required computational power. In particular, Single Instruction Multiple Data (SIMD) many-core architectures have been adopted for multimedia and signal processing applications with massive amounts of data parallelism, where both performance and flexible programmability are important metrics. However, this class of processors faces many challenges due to its increasing fabrication cost and design complexity. Moreover, the widening gap between design productivity and chip complexity calls for new design methods. The recent evolution of silicon integration technology, on the one hand, and the wide use of reusable Intellectual Property (IP) cores and Field Programmable Gate Arrays (FPGAs), on the other hand, are attractive solutions to meet these challenges and reduce time-to-market. The objective of this work is to study the performance of massively parallel SIMD on-chip architectures built with current design methodologies on recent integration technologies. The flexibility offered by these new design tools enables design space exploration to identify the most effective implementations. This work introduces an IP-based design methodology for easily building configurable and flexible massively parallel SIMD processors on FPGA platforms. The proposed approach implements a generic parallel architecture based on IP assembly that can be tailored to better satisfy the requirements of highly demanding applications. The experimental results show the effectiveness of the design methodology as well as the performance of the implemented SoC.
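To make the SIMD execution model referred to in the abstract concrete, the following is a minimal, purely illustrative software sketch of a parameterizable processing-element (PE) array operating in lockstep. It is not taken from the paper's implementation; all names and parameters (num_pes, data_width, the broadcast/elementwise operations) are hypothetical and only stand in for the kind of configuration knobs a tailorable IP-based SIMD architecture would expose.

```python
# Illustrative only: a toy model of a configurable SIMD PE array.
# All identifiers and parameters here are hypothetical, not from the paper.
from dataclasses import dataclass, field
from typing import List


@dataclass
class SIMDArray:
    num_pes: int = 16      # number of processing elements (configurable)
    data_width: int = 16   # simulated operand width in bits (configurable)
    regs: List[int] = field(default_factory=list)

    def __post_init__(self):
        self.mask = (1 << self.data_width) - 1
        self.regs = [0] * self.num_pes

    def load(self, values: List[int]) -> None:
        """Distribute one operand to each PE's local register."""
        assert len(values) == self.num_pes
        self.regs = [v & self.mask for v in values]

    def broadcast_add(self, operand: int) -> None:
        """Single instruction, multiple data: every PE adds the same operand."""
        self.regs = [(r + operand) & self.mask for r in self.regs]

    def elementwise_mul(self, values: List[int]) -> None:
        """Each PE multiplies its register by its own local operand."""
        self.regs = [(r * v) & self.mask for r, v in zip(self.regs, values)]


# Usage: tailor the array size and data width to the application,
# then issue instructions executed by all PEs in lockstep.
arr = SIMDArray(num_pes=8, data_width=8)
arr.load(list(range(8)))
arr.broadcast_add(10)
print(arr.regs)  # [10, 11, 12, 13, 14, 15, 16, 17]
```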
