Abstract

Deep neural networks are among the most powerful models in machine learning, achieving state-of-the-art results on many tasks, including image classification, object detection, and text recognition. Many techniques have been proposed to improve the training and generalization performance of deep neural networks, such as dropout, ReLU, and batch normalization. In this paper, we propose a new building block for deep neural networks, the learning automata competition unit (LCU). An LCU consists of a group of general neural units together with learning automata. Learning automata are reinforcement learning methods that learn an optimal action by continuously interacting with a stochastic environment. Because learning automata have strong policy-making ability in both stochastic and non-stationary environments, the proposed LCU induces competition within a group of neural units and gradually selects the better-trained units during training. Selecting neural units through competition makes training more efficient and improves both training and generalization performance. Experiments on MNIST, CIFAR-10, and the Reuters newswire topic classification dataset demonstrate the performance of our method for both deep fully connected and convolutional neural networks.
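
To illustrate the kind of selection mechanism described above, the following Python sketch shows a generic linear reward-inaction (L_RI) learning automaton choosing among a group of candidate units. This is not the authors' exact LCU; the reward signal, learning rate, and toy per-unit success probabilities are assumptions made only for the example.

import numpy as np

class LearningAutomaton:
    """Linear reward-inaction (L_RI) automaton over a group of candidate units.

    Keeps a probability vector over actions (which unit in the group to pick)
    and reinforces the chosen action whenever the environment rewards it.
    """

    def __init__(self, num_actions, learning_rate=0.05, seed=0):
        self.probs = np.full(num_actions, 1.0 / num_actions)
        self.learning_rate = learning_rate
        self.rng = np.random.default_rng(seed)

    def select_action(self):
        # Sample one unit from the group according to the current policy.
        return self.rng.choice(len(self.probs), p=self.probs)

    def update(self, action, reward):
        # Reward-inaction update: shift probability mass toward the chosen
        # action only on a favourable signal (reward == 1); otherwise do nothing.
        if reward == 1:
            self.probs *= (1.0 - self.learning_rate)
            self.probs[action] += self.learning_rate


# Toy environment (hypothetical): unit 2 stands in for the "better trained"
# unit and is rewarded most often, so the policy should converge toward it.
automaton = LearningAutomaton(num_actions=4, learning_rate=0.1, seed=42)
success_prob = np.array([0.2, 0.4, 0.8, 0.3])
env_rng = np.random.default_rng(0)
for _ in range(2000):
    a = automaton.select_action()
    r = 1 if env_rng.random() < success_prob[a] else 0
    automaton.update(a, r)
print(automaton.probs)  # probability mass concentrates on the best unit (index 2)

In an LCU-style setting, the reward would presumably come from the training process itself (for example, whether activating a given unit reduced the loss), so the automaton gradually favours the better-trained units in the group.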
