Abstract

Matrix multiplication is an essential mathematical operation in a wide range of applications, including signal processing, computer graphics, and intelligent robotics. Intelligent and autonomous robots rely on various navigation algorithms (e.g., the Extended Kalman Filter (EKF), reinforcement learning, A*, and the artificial potential field) [1]–[4] and deep neural network (DNN) algorithms (e.g., Darknet in YOLOv3), all of which involve intensive matrix multiplications of different sizes and shapes. Emerging Intelligent and Autonomous Mobile Robots (I-AMRs) therefore place a higher demand on efficient hardware acceleration of a comprehensive range of matrix multiplications, as depicted in Fig. 1. Recent works have focused on hardware acceleration of matrix multiplications optimized for a specific navigation or DNN algorithm [3]–[5], and thus cannot achieve high hardware utilization, area efficiency, and energy efficiency across the varied matrix multiplications found in I-AMRs.
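To make concrete why these workloads are dominated by matrix products of varying shapes, consider the standard EKF prediction step (a textbook formulation, not taken from this abstract): the state-covariance update is a chain of square matrix multiplications whose dimensions are set by the state size $n$,

$$
P_{k \mid k-1} = F_k \, P_{k-1 \mid k-1} \, F_k^{\top} + Q_k, \qquad F_k,\, P,\, Q_k \in \mathbb{R}^{n \times n},
$$

whereas a fully connected DNN layer computes $\mathbf{y} = W\mathbf{x} + \mathbf{b}$ with $W \in \mathbb{R}^{m \times n}$ and typically much larger, rectangular dimensions, so the two classes of algorithms exercise very different matrix sizes and aspect ratios on the same accelerator.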
