Abstract

This paper deals with the effect of digital noise on the numerical stability of neural networks. Digital noise arises from the inexactness of floating-point operations; accumulated errors eventually lead to a loss of significance. Experiments show that more redundant networks are more strongly affected by this noise. The effect is tested on both synthetic and real-world samples. As a consequence, network results obtained after the onset of fluctuations should be excluded. The experimental results allow us to hypothesize that the minimal loss-function values that still preserve significance are achieved by networks whose size is close to the complexity of the dataset. This suggests choosing the sizes of network layers according to the complexity of a particular dataset, rather than universally for an architecture and a general problem statement without regard to the data. In the case of fine-tuning, it also suggests that pruning network layers can improve accuracy and prediction reliability by reducing the influence of numerical noise. The results of this article are based on the analysis of numerical experiments involving the training of more than 50,000 neural networks for thousands of epochs each. Almost all of these networks eventually begin to fluctuate.
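
As a minimal illustration of the digital noise discussed above (a sketch for intuition, not taken from the paper's experiments), the following Python snippet accumulates a small step many times in single and double precision; the step value and iteration count are chosen purely for illustration. Each floating-point addition introduces a rounding error, and the accumulated errors show the loss of significance that the paper studies in the context of neural network training.

```python
import numpy as np

# Illustrative sketch: repeatedly add a small step in single and double
# precision and compare the result with the exact value n * step.
# Each addition introduces a rounding error; the errors accumulate
# into "digital noise" that erodes significant digits.
step, n = 0.1, 1_000_000
step32, step64 = np.float32(step), np.float64(step)

sum32 = np.float32(0.0)
sum64 = np.float64(0.0)
for _ in range(n):
    sum32 += step32
    sum64 += step64

exact = n * step
print(f"float32 sum = {float(sum32):.4f}, error = {abs(float(sum32) - exact):.4f}")
print(f"float64 sum = {float(sum64):.4f}, error = {abs(float(sum64) - exact):.2e}")
# The float32 error is typically several orders of magnitude larger than the
# float64 error, illustrating how accumulated rounding noise destroys precision.
```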
