Abstract

A type of optimized neural network with limited-precision weights (LPWNN) is presented in this paper. Such networks, which require less memory for storing the weights and less expensive floating-point units to perform the computations involved, are better suited to embedded-system implementation than networks with real-valued weights. Based on an analysis of the learning capability of LPWNNs, a Quantized Back-Propagation Step-by-Step (QBPSS) algorithm is proposed for such networks to overcome the effects of limited precision. Methods for designing and training LPWNNs are presented, including quantization of the non-linear activation function and selection of the learning rate, network architecture, and weight precision. The performance of the optimized LPWNN was evaluated against conventional neural networks with double-precision floating-point weights on road-image recognition for an intelligent vehicle on an ARM9 embedded system; the results show that the optimized LPWNN runs seven times faster than the conventional network.
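The QBPSS algorithm itself is described in the body of the paper, not in this abstract. As a rough, hypothetical sketch of the two quantization steps the abstract names (limited-precision weights and a quantized activation function), the Python fragment below snaps weights to a uniform fixed-point grid and replaces the sigmoid with a precomputed lookup table; all function names, bit widths, and clipping ranges here are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def quantize_weights(w, n_bits=8, w_max=1.0):
        """Snap real-valued weights to a uniform fixed-point grid with
        2**n_bits levels in [-w_max, w_max], saturating out-of-range values.
        (QBPSS additionally interleaves this step with back-propagation,
        which is not shown here.)"""
        step = 2.0 * w_max / (2 ** n_bits - 1)
        return np.round(np.clip(w, -w_max, w_max) / step) * step

    def make_sigmoid_table(n_entries=256, x_max=8.0):
        """Precompute a sigmoid lookup table so the target device avoids
        calling exp() at inference time; inputs are clipped to [-x_max, x_max]."""
        xs = np.linspace(-x_max, x_max, n_entries)
        return 1.0 / (1.0 + np.exp(-xs))

    def sigmoid_lut(x, table, x_max=8.0):
        """Approximate sigmoid(x) by nearest-entry lookup in the table."""
        n = len(table)
        idx = np.round((np.clip(x, -x_max, x_max) + x_max) / (2 * x_max) * (n - 1))
        return table[idx.astype(int)]

    # Example: 8-bit weight grid and table-based activation
    w = np.array([0.7312, -0.0519, 1.4])
    print(quantize_weights(w, n_bits=8))
    table = make_sigmoid_table()
    print(sigmoid_lut(np.array([-1.0, 0.0, 2.5]), table))

A lookup-table activation is one common way to quantize a non-linear function on fixed-point hardware; whether the paper uses a table, a piecewise-linear fit, or another scheme is not stated in the abstract.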

