Abstract

Battery-operated, resource-limited embedded sensing devices increasingly use recurrent neural networks (RNNs) for real-time perception of complex environments. However, most RNNs process large amounts of time-series data, making them computationally intensive and energy-hungry, so developing energy-efficient RNN models is a crucial goal. This paper introduces IS-RNN, an energy-efficient RNN model with importance-based sparsification, aimed at improving the energy efficiency of RNN-based in-sensor inference. The work is motivated by the observation that time-series data captured by sensors is often highly redundant. Sparsifying the input sequence eliminates invalid input accesses and computations, and adaptively controlling the sparsity of the input sequence allows us to improve inference performance while adhering to energy constraints. Specifically, we develop an importance-score analysis method that quantifies the contribution of each input to RNN inference performance. Building on this quantitative importance-score analysis, we propose a dynamic efficient RNN architecture that bases its decisions on the most important data to achieve the highest accuracy under energy constraints. We further introduce an adaptive RNN inference method that improves prediction accuracy under energy constraints by trading off computational cost against model performance. We evaluated IS-RNN on five datasets; the results show that it achieves higher inference accuracy than state-of-the-art methods for the same amount of data processed.
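The importance-based sparsification idea described above can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's actual method: the scoring heuristic (magnitude of change between consecutive timesteps) and the function names `importance_scores` and `sparsify` are assumptions chosen for illustration; the paper's own importance-score analysis is not reproduced here.

```python
import numpy as np

def importance_scores(x):
    """Assign each timestep a score; higher means more informative.

    Hypothetical proxy: the first step is scored by its own magnitude,
    later steps by how much they change from the previous step, so that
    redundant (unchanging) inputs receive low scores.
    """
    scores = np.empty(len(x))
    scores[0] = np.abs(x[0]).sum()
    scores[1:] = np.abs(np.diff(x, axis=0)).sum(axis=1)
    return scores

def sparsify(x, budget):
    """Keep only the `budget` highest-scoring timesteps (an energy proxy),
    preserving their temporal order, and return them with their indices."""
    scores = importance_scores(x)
    keep = np.sort(np.argsort(scores)[-budget:])
    return x[keep], keep

# Example: a sequence of 10 timesteps with 3 features each;
# keep 4 timesteps under the assumed energy budget.
rng = np.random.default_rng(0)
x = rng.normal(size=(10, 3))
x_sparse, kept = sparsify(x, budget=4)
```

The sparsified sequence `x_sparse` would then be fed to the RNN in place of the full sequence, reducing input accesses and computation at the cost of dropping low-importance steps.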
