Abstract

Supervised learning in neural nets means optimizing synaptic weights $\mathbf{W}$ such that the outputs $y(\mathbf{x};\mathbf{W})$ for inputs $\mathbf{x}$ match the corresponding targets $t$ from the training data set as closely as possible. This optimization amounts to minimizing a loss function $\mathscr{L}(\mathbf{W})$ that is usually motivated by maximum-likelihood principles and thereby implicitly makes prior assumptions about the distribution of the output errors $y - t$. While the classical crossentropy loss assumes triangular error distributions, it has recently been shown that generalized power error loss functions can be adapted to more realistic error distributions by fitting the exponent $q$ of a power function used for initializing the backpropagation learning algorithm. This approach can significantly improve performance, but computing the loss function requires the antiderivative of the function $f(y) := y^{q-1}/(1-y)$, which has previously been determined only for natural $q \in \mathbb{N}$. In this work I extend this approach to rational $q = n/2^m$ whose denominator is a power of 2. I give closed-form expressions for the antiderivative $\int f(y)\,\mathrm{d}y$ and the corresponding loss function. The benefits of this approach are demonstrated by experiments showing that optimal exponents $q$ are often non-natural, and that the error exponents $q$ best fitting the output error distributions vary continuously during learning, typically decreasing from large $q > 1$ to small $q < 1$ as learning converges. These results suggest new adaptive learning methods in which the loss function is continuously adapted to the output error distributions during learning.
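For orientation, the previously known natural-$q$ case mentioned above can be written down directly; the following is a standard calculus identity (obtained by polynomial division of $y^{q-1}$ by $1-y$, not a result of this paper, and the rational $q = n/2^m$ case treated here is considerably more involved):

\[
\int \frac{y^{q-1}}{1-y}\,\mathrm{d}y \;=\; -\ln(1-y) \;-\; \sum_{k=1}^{q-1} \frac{y^k}{k} \;+\; C, \qquad q \in \mathbb{N},\; 0 < y < 1.
\]

For $q = 1$ the sum is empty and the antiderivative reduces to $-\ln(1-y)$, i.e. the familiar crossentropy-type term, which is consistent with the generalized power error loss containing the classical loss as a special case.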
