Abstract

Most deep learning algorithms have network hyper-parameters that affect their training results, and the choice of neural network architecture also has a significant impact on performance. The performance of deep learning algorithms usually scales with the overall number of network parameters, so exploring neural network architectures with many hyper-parameters consumes excessive resources. To address this problem, this paper proposes a vector representation for neural network architectures and establishes a multi-objective optimization model based on genetic algorithms, referred to as the “NNOO Vector Representation based on GA and Its Optimization Method”. The multi-objective optimization model automatically optimizes the neural network architecture and its hyper-parameters, improving network accuracy while reducing the overall number of network parameters. Test results on the MNIST data set show that a traditionally, empirically configured network reaches 95.61% accuracy, and a network optimized by TensorFlow’s optimization algorithm reaches 86.2% average accuracy. With the proposed optimization method, network accuracy improves to 96.86%, while the number of network parameters is reduced by 32.6% compared with the traditional empirical network and by 13.2% compared with the network optimized by TensorFlow’s algorithm. The proposed method therefore has clear practical value for neural network optimization problems and offers a new approach to optimizing large and deep networks.
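The core idea described above can be illustrated with a minimal sketch: encode an architecture as a vector of hidden-layer widths, score each candidate on both (a proxy for) accuracy and parameter count, and evolve the population with a genetic algorithm. All names, the layer-width search space, and the surrogate fitness below are illustrative assumptions, not the paper's actual encoding or objectives; in real use the accuracy term would come from training and evaluating each candidate network.

```python
import random

# Hypothetical vector encoding: each gene is the width of one dense hidden layer.
# Search space and fitness are illustrative stand-ins for the paper's setup.
LAYER_CHOICES = [16, 32, 64, 128, 256]
MAX_LAYERS = 4
INPUT_DIM, OUTPUT_DIM = 784, 10  # MNIST-like input/output shapes

def param_count(arch):
    """Total weights + biases of a dense network with the given hidden widths."""
    dims = [INPUT_DIM] + arch + [OUTPUT_DIM]
    return sum(dims[i] * dims[i + 1] + dims[i + 1] for i in range(len(dims) - 1))

def accuracy_proxy(arch):
    """Toy surrogate for validation accuracy (real use: train and evaluate)."""
    capacity = sum(arch)
    return capacity / (capacity + 200.0)  # saturating: diminishing returns on size

def fitness(arch, lam=1e-6):
    # Scalarized multi-objective: reward accuracy, penalize parameter count.
    return accuracy_proxy(arch) - lam * param_count(arch)

def random_arch():
    return [random.choice(LAYER_CHOICES) for _ in range(random.randint(1, MAX_LAYERS))]

def mutate(arch):
    child = arch[:]
    child[random.randrange(len(child))] = random.choice(LAYER_CHOICES)
    return child

def crossover(a, b):
    cut_a, cut_b = random.randint(0, len(a)), random.randint(0, len(b))
    child = a[:cut_a] + b[cut_b:]
    return child[:MAX_LAYERS] if child else [random.choice(LAYER_CHOICES)]

def evolve(pop_size=20, generations=30, seed=0):
    random.seed(seed)
    pop = [random_arch() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]  # keep the fitter half
        pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)

best = evolve()
print("best architecture:", best, "parameters:", param_count(best))
```

The scalarized fitness collapses the two objectives into one score for brevity; a Pareto-based selection such as NSGA-II would keep them separate, which is closer in spirit to a true multi-objective formulation.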
