Abstract

Convolutional neural networks are among the most important models in deep learning. If the singular values of each layer's Jacobian are bounded around 1 during training, the network can avoid the exploding/vanishing gradient problem and generalize better. We propose a new Frobenius-norm penalty function for a convolutional kernel tensor that drives the singular values of the corresponding transformation matrix toward 1, and we show how to carry out gradient-type methods on this penalty. This provides a potentially useful regularization method for the weights of convolutional layers.
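
To make the idea concrete, here is a minimal sketch, not the authors' implementation: it forms the transformation matrix M of a small single-channel "same"-padded 2D convolution explicitly, penalizes the Frobenius distance of M^T M from the identity (which is zero exactly when all singular values of M equal 1), and uses automatic differentiation for the gradient step, whereas the paper derives the gradient for the kernel tensor directly. All names, sizes, and optimizer settings below are illustrative assumptions.

    # Sketch only: single-channel "same" convolution; autograd supplies the
    # gradient of the Frobenius-norm penalty (the paper derives it explicitly).
    import torch
    import torch.nn.functional as F

    def conv_matrix(kernel, in_size):
        """Explicit matrix M of a 'same'-padded 2D convolution on an
        in_size x in_size single-channel input: conv(x).flatten() == M @ x.flatten()."""
        n = in_size * in_size
        basis = torch.eye(n).reshape(n, 1, in_size, in_size)  # all basis images
        out = F.conv2d(basis, kernel.unsqueeze(0).unsqueeze(0),
                       padding=kernel.shape[-1] // 2)
        return out.reshape(n, n).T  # column i = convolution of i-th basis vector

    # Illustrative setup: small random kernel, Adam as the gradient-type method.
    kernel = (0.1 * torch.randn(3, 3)).requires_grad_()
    opt = torch.optim.Adam([kernel], lr=1e-2)

    for step in range(500):
        M = conv_matrix(kernel, in_size=8)
        # Frobenius-norm penalty: equals sum_i (sigma_i^2 - 1)^2 over the
        # singular values sigma_i of M, so it is small iff they cluster near 1.
        penalty = ((M.T @ M - torch.eye(M.shape[0])) ** 2).sum()
        opt.zero_grad()
        penalty.backward()
        opt.step()

    # After training, the singular values should be close to 1.
    print(torch.linalg.svdvals(conv_matrix(kernel.detach(), 8)))

In practice the penalty would be added to the task loss as a regularization term rather than minimized on its own, and one would exploit the structure of the convolution's transformation matrix instead of materializing it, as the explicit matrix above is only tractable for toy sizes.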
