Abstract

Deep learning accelerators are specialized hardware architectures designed to improve the computational efficiency of deep neural network (DNN) training. Integrating energy-efficient deep learning accelerators into image sensors could facilitate the deployment of DNNs in embedded vision applications. However, previous in-sensor deep learning accelerators commonly bypassed the image signal processor (ISP), overlooking its significant impact on accuracy. This departure from the traditional vision pipeline degrades the performance of machine learning models trained on post-ISP (ISP-processed) data. In this study, we establish a set of energy-efficient techniques that preserve the benefits of the ISP while limiting the covariate shift between the target data (RAW images) and the training data (ISP-processed images), making in-sensor accelerators practical to use. To be clear, our results do not diminish the relevance of in-sensor accelerators. Rather, we identify shortcomings in the methodology of prior work and propose techniques that allow in-sensor accelerators to realize their full potential.
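As a rough illustration of the covariate-shift problem the abstract describes (not the paper's actual method), the sketch below applies a lightweight ISP approximation to a RAW Bayer frame so that it more closely resembles the ISP-processed images a model was trained on. The function name `simple_isp_approximation` and the choice of stages (naive demosaicing, gray-world white balance, gamma correction) are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch: approximate a few ISP stages on a RAW Bayer mosaic
# to reduce the gap between RAW inputs and ISP-processed training data.
import numpy as np

def simple_isp_approximation(raw: np.ndarray, gamma: float = 2.2) -> np.ndarray:
    """Map an RGGB Bayer mosaic (H x W, uint16) toward an ISP-like RGB image."""
    raw = raw.astype(np.float32) / np.iinfo(np.uint16).max  # normalize to [0, 1]

    # Naive demosaic: collapse each 2x2 Bayer cell into one RGB pixel.
    r = raw[0::2, 0::2]
    g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2.0
    b = raw[1::2, 1::2]
    rgb = np.stack([r, g, b], axis=-1)

    # Gray-world white balance: scale each channel toward the global mean.
    rgb *= rgb.mean() / (rgb.mean(axis=(0, 1)) + 1e-8)

    # Gamma correction to mimic the ISP's tone curve.
    return np.clip(rgb, 0.0, 1.0) ** (1.0 / gamma)

if __name__ == "__main__":
    bayer = np.random.randint(0, 65535, size=(8, 8), dtype=np.uint16)
    print(simple_isp_approximation(bayer).shape)  # (4, 4, 3)
```

In practice, an in-sensor accelerator would need a far cheaper (or learned) approximation than a full ISP; the point of the sketch is only to show where the RAW-versus-ISP-processed domain gap arises and how a preprocessing step can narrow it.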
