Abstract

A statistical quantization model is used to analyze the effects of quantization in the digital implementation of a high-order function neural network (HOFNN). From this model, we analyze the performance degradation and fault tolerance of the network as functions of the number of quantization bits and of the network order, and we predict the quantization error of the HOFNN given the properties of the network and the number of quantization bits. Experimental results show that the error rate of the HOFNN is inversely proportional to the number of quantization bits M, and that the recognition performance of a backpropagation (BP) network and the HOFNN is nearly the same across different quantization bit widths. The network's performance degrades markedly once the quantization falls below 4 bits.
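As a rough illustration of the bit-width dependence described above, the following minimal Python sketch (our own construction, not the paper's model) quantizes the weights of a toy second-order network to M bits with a uniform quantizer and measures the output error against the full-precision reference. The network shape, the quantizer, and all names and parameters here are illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the paper's model): output error of a
# toy second-order (higher-order) unit when its weights are uniformly
# quantized to M bits.
import numpy as np

rng = np.random.default_rng(0)

def quantize(w, m_bits):
    """Uniform quantization of w to m_bits levels over [-1, 1]."""
    step = 2.0 / (2 ** m_bits)
    return np.clip(np.round(w / step) * step, -1.0, 1.0)

def second_order_output(x, w1, w2):
    """Toy higher-order unit: linear terms plus pairwise product terms."""
    pair = np.outer(x, x)[np.triu_indices(len(x))]  # x_i * x_j, i <= j
    return np.tanh(w1 @ x + w2 @ pair)

n_in = 8
x = rng.uniform(-1, 1, n_in)
w1 = rng.uniform(-1, 1, n_in)
w2 = rng.uniform(-1, 1, n_in * (n_in + 1) // 2)

y_ref = second_order_output(x, w1, w2)  # full-precision reference
for m in range(2, 9):
    y_q = second_order_output(x, quantize(w1, m), quantize(w2, m))
    print(f"M = {m} bits: |error| = {abs(y_ref - y_q):.5f}")
```

In this toy setting the error shrinks as M grows and rises sharply at small bit widths, which is qualitatively in line with the degradation below 4 bits reported in the abstract.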
