Abstract
Since AlphaGo defeated the world Go champion in 2016 and drew wide attention, neural networks have grown increasingly popular, and research on them has steadily advanced and been applied across many fields. Today, artificial intelligence and machine learning are an essential part of modern society and intelligent systems: image recognition, speech recognition, and visual learning are all closely tied to machine learning. In practice, however, training often yields unsatisfactory results or even fails outright, so improving the accuracy of neural network training is imperative. In this paper, classical neural network tasks such as speech recognition, image processing, and MNIST are used to set training parameters more effectively and to improve training accuracy through voting, quantization, restarts, and other methods. One part of the study investigates the relationship between the number of restarts in the training process and the overall extent of learning improvement; several algorithms for exploiting these restarts are also compared and selected. Finally, we conclude that the more restarts are made when training a convolutional neural network, the smaller the additional accuracy gain each restart provides.
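To make the restart strategy concrete, the following is a minimal sketch (not the authors' code) of restart-based training: a small model is trained several times from fresh random initializations, and the run with the best validation accuracy is kept. The `make_model` factory, the data loaders, and the hyperparameters are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of restart-based training in PyTorch.
# Assumed inputs: make_model() returning a fresh nn.Module,
# plus train_loader / val_loader yielding (inputs, labels) batches.
import torch
import torch.nn as nn


def train_once(model, train_loader, val_loader, epochs=5, lr=1e-3):
    """Train one freshly initialized model and return its validation accuracy."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        model.train()
        for x, y in train_loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in val_loader:
            correct += (model(x).argmax(dim=1) == y).sum().item()
            total += y.numel()
    return correct / total


def train_with_restarts(make_model, train_loader, val_loader, n_restarts=5):
    """Repeat training from scratch n_restarts times; keep the best result."""
    best_acc, best_state = 0.0, None
    for _ in range(n_restarts):
        model = make_model()  # fresh random initialization on each restart
        acc = train_once(model, train_loader, val_loader)
        if acc > best_acc:
            best_acc, best_state = acc, model.state_dict()
    return best_acc, best_state
```

Under this best-of-n scheme, each additional restart can only raise the maximum accuracy seen so far, but by a progressively smaller margin, which is consistent with the diminishing-returns conclusion stated in the abstract.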