Abstract

Regularization is an effective way to mitigate overfitting in neural networks and to improve the generalization ability of a model. However, the working mechanisms of regularization methods and their impact on model performance have not been fully explored. In this paper, we study and analyze them using information bottleneck theory and a theory from the human sensory system. We propose a metric, called the AEntry value, to characterize the encoding length of hidden layers. We then conduct extensive experiments on the MNIST and FashionMNIST datasets with several commonly used regularization algorithms and compute the corresponding AEntry values. Analyzing these results leads to three conclusions. (1) Regularization influences how features relevant to the prediction task are encoded in the network. Early stopping avoids introducing task-irrelevant information into the model by halting training at an appropriate iteration. Laplace, Gaussian, and Sparse Response regularization compress the task-relevant representation and improve the performance of the network by introducing prior information into the model. In contrast, Dropout, Batch Normalization, and Layer Normalization improve performance by adopting redundant representations, which increases the encoding length of features. (2) The encoding in a neural network does not satisfy the data processing inequality of information theory, which is mainly caused by redundant coding of the extracted features. (3) Overfitting is caused by introducing information that is irrelevant to the target. These results offer insight into building more efficient regularization algorithms to improve the performance of neural network models.
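The exact definition of the AEntry value is given in the full text; as a rough illustration of what "encoding length of a hidden layer" can mean, the sketch below estimates the empirical entropy of a layer's activations after discretization, a common device in information-bottleneck analyses. The function name, bin count, and the discretization scheme are assumptions for illustration only, not the paper's metric.

```python
# Minimal sketch (NOT the paper's AEntry definition): estimate the encoding
# length of a hidden layer by binning its activations and computing the
# Shannon entropy of the resulting discrete codes.
import numpy as np

def layer_encoding_length(activations: np.ndarray, n_bins: int = 30) -> float:
    """Estimate the encoding length (in bits) of a hidden layer.

    activations: array of shape (n_samples, n_units), e.g. recorded while
                 passing MNIST or FashionMNIST samples through the network.
    n_bins: number of equal-width bins used to discretize each activation.
    """
    # Discretize every activation value into one of n_bins levels.
    lo, hi = activations.min(), activations.max()
    edges = np.linspace(lo, hi, n_bins + 1)
    codes = np.digitize(activations, edges[1:-1])  # same shape as activations

    # Treat each sample's vector of bin indices as one discrete code word
    # and compute the entropy of the empirical code distribution.
    _, counts = np.unique(codes, axis=0, return_counts=True)
    probs = counts / counts.sum()
    return float(-(probs * np.log2(probs)).sum())

# Usage on stand-in activations (in practice these would be collected from a
# trained network with and without a given regularizer, then compared).
rng = np.random.default_rng(0)
hidden = rng.normal(size=(1000, 16))
print(layer_encoding_length(hidden))
```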
