Abstract

Regularization is essential in deep learning because it mitigates the risk of overfitting in deep neural networks. This study presents a novel regularization method, referred to as DL-Reg, which reduces the nonlinearity of deep networks by enforcing a degree of linearity on their behavior. The method incorporates a linear constraint into the objective function of a deep neural network, defined as the error of a linear mapping from the inputs of the model to its outputs; this penalty, weighted by a regularization factor, discourages the network from overfitting. The effectiveness of DL-Reg is evaluated by training state-of-the-art deep network models on several benchmark datasets. The experimental results demonstrate that the proposed method yields significant improvements over existing regularization techniques and enhances the performance of deep neural networks, particularly on small training datasets. The PyTorch implementation of DL-Reg is available at https://github.com/m2dgithub/DL-Reg.git.
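
To make the idea concrete, the sketch below shows one plausible way such a penalty could be computed in PyTorch: fit the best least-squares linear map from a batch of inputs to the network's outputs, then penalize the residual of that fit. The function name `dl_reg_penalty`, the per-batch closed-form fit, the decision to detach the fitted map, and the mean-squared residual are all illustrative assumptions on our part; the authors' exact formulation is in the linked repository.

```python
import torch

def dl_reg_penalty(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Error of the best least-squares linear map from inputs x to outputs y.

    x: flattened batch of inputs,     shape (n, d_in)
    y: corresponding network outputs, shape (n, d_out)
    """
    # Fit W minimizing ||x @ W - y||^2 in closed form; the fit is detached
    # so gradients flow only through the network outputs y (assumption).
    w = torch.linalg.lstsq(x.detach(), y.detach()).solution
    # Penalize the network's deviation from its own best linear fit.
    return ((x @ w - y) ** 2).mean()

# Hypothetical training step; lambda_reg plays the role of the
# regularization factor mentioned in the abstract.
model = torch.nn.Sequential(
    torch.nn.Linear(10, 32), torch.nn.ReLU(), torch.nn.Linear(32, 3)
)
criterion = torch.nn.CrossEntropyLoss()
lambda_reg = 0.1

inputs, targets = torch.randn(64, 10), torch.randint(0, 3, (64,))
outputs = model(inputs)
loss = criterion(outputs, targets) + lambda_reg * dl_reg_penalty(
    inputs.flatten(1), outputs
)
loss.backward()
```

Under this reading, driving the penalty to zero would make the network exactly linear on the batch, so the regularization factor controls how far the model may deviate from linear behavior.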
