Abstract

Nearly all neural network models are trained with gradient descent, which remains by far the most successful optimization approach in machine learning. However, gradient descent is a greedy algorithm, and this greediness underlies some of the biggest open problems in neural networks: it offers no guarantee that a better solution does not exist. This article presents an empirical study of the performance of neural networks with two hidden layers and proposes a practical method for improving their accuracy: the cooperation of multiple neural networks. In this study, we applied data augmentation by adding noise to the training set and compared three training methods: batch gradient descent (BGD), stochastic gradient descent (SGD), and batch stochastic gradient descent (BSGD). By making the neural networks cooperate, performance improved by 47% over the baseline networks (the generalization classification error probability, PEG, of nine cooperating networks is 0.071). Finally, real-time classification with the cooperation method achieves a PEG of 0.04 (versus 0.104 for a single network), further confirming that cooperation improves the performance of neural networks.
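
As a rough illustration of the two techniques named above, the sketch below augments a training set with Gaussian noise, trains several two-hidden-layer networks with SGD, and combines ("cooperates") them by averaging their predicted probabilities. This is a minimal sketch, not the authors' implementation: the dataset is synthetic, and names such as noise_std, n_networks, and the layer sizes are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Data augmentation: add Gaussian noise to copies of the training examples.
noise_std = 0.1  # illustrative value, not from the paper
X_aug = np.vstack([X_train, X_train + rng.normal(0.0, noise_std, X_train.shape)])
y_aug = np.concatenate([y_train, y_train])

# Cooperation: train several independent two-hidden-layer networks with SGD
# and average their predicted class probabilities.
n_networks = 9  # illustrative ensemble size
ensemble = [
    MLPClassifier(hidden_layer_sizes=(32, 16), solver="sgd",
                  learning_rate_init=0.01, max_iter=500,
                  random_state=seed).fit(X_aug, y_aug)
    for seed in range(n_networks)
]
avg_proba = np.mean([net.predict_proba(X_test) for net in ensemble], axis=0)
y_pred = avg_proba.argmax(axis=1)

# Generalization classification error probability (PEG): fraction misclassified
# on held-out data.
peg = np.mean(y_pred != y_test)
print(f"cooperative PEG: {peg:.3f}")
```

In this reading, "cooperation" is treated as simple probability averaging across independently trained networks; the paper's exact combination rule and PEG figures are not reproduced here.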
