Abstract

Laplacian regularization, which encourages adjacent samples with the same label to share similar features, has been widely used in neural networks for its ability to improve generalization. However, most existing methods consider only the global structure of same-label data and neglect samples in boundary regions that carry different labels. To address this limitation and improve performance, this paper proposes a novel regularization method that enhances the hidden structure of deep neural networks. The proposed method imposes a double Laplacian regularization on the objective function and leverages the full data information to capture its hidden structure in the manifold space. The double Laplacian regularization exerts both attraction and repulsion effects on the hidden layer: it encourages the hidden features of instances with the same label to move closer together and forces those of different categories further apart. Extensive experiments demonstrate that the proposed method yields significant accuracy improvements across different types of deep neural networks.
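
The abstract does not give the exact form of the regularizer. As an illustration only, the following is a minimal PyTorch sketch of a double Laplacian-style penalty on mini-batch hidden features, assuming binary same-label/different-label adjacency weights and a hinge (margin) form for the repulsion term; the function name, the lambda hyperparameters, and the margin are hypothetical and not taken from the paper.

```python
import torch

def double_laplacian_reg(h, y, lam_attract=1e-3, lam_repel=1e-3, margin=1.0):
    """Illustrative double Laplacian-style regularizer (hypothetical sketch).

    h: (n, d) hidden-layer activations for a mini-batch
    y: (n,)   integer class labels
    """
    n = h.size(0)
    # Pairwise squared Euclidean distances between hidden features.
    dist2 = torch.cdist(h, h).pow(2)                          # (n, n)

    eye  = torch.eye(n, device=h.device)
    # Same-label pairs, excluding self-pairs on the diagonal.
    same = (y.unsqueeze(0) == y.unsqueeze(1)).float() - eye
    # Different-label pairs.
    diff = 1.0 - same - eye

    # Attraction: pull hidden features with the same label closer together.
    attract = (same * dist2).sum() / same.sum().clamp(min=1.0)
    # Repulsion: push features of different classes at least `margin` apart;
    # the hinge keeps this term bounded (the paper may use another form).
    repel = (diff * torch.relu(margin - dist2)).sum() / diff.sum().clamp(min=1.0)

    return lam_attract * attract + lam_repel * repel
```

In use, such a term would simply be added to the task loss computed on the same mini-batch, e.g. loss = cross_entropy(logits, y) + double_laplacian_reg(h, y).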
