Abstract

Recently, multilayer neural networks have been used for still-picture compression. In such networks, the gray levels of the input picture must be normalized before they are fed into the network. In this paper we investigate six different normalization functions, four of which are new and appear here for the first time. We show that the compression efficiency of a neural network depends on the normalization function used, and that the new normalization functions consistently outperform the traditional ones.
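The abstract does not specify the six normalization functions themselves, but as an illustration of the preprocessing step it describes, the sketch below shows one common "traditional" choice: linear min-max scaling of 8-bit gray levels into [0, 1]. The function name and the choice of min-max scaling are assumptions for illustration only, not the paper's method.

```python
import numpy as np

def normalize_gray_levels(image: np.ndarray) -> np.ndarray:
    """Linearly scale gray levels into [0, 1] (min-max normalization).

    This is one common traditional normalization; the six functions
    compared in the paper are not reproduced here.
    """
    img = image.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:
        # Flat image: every pixel has the same gray level,
        # so return zeros to avoid division by zero.
        return np.zeros_like(img)
    return (img - lo) / (hi - lo)

# Example: a 2x2 8-bit grayscale patch
patch = np.array([[0, 128], [255, 64]], dtype=np.uint8)
normalized = normalize_gray_levels(patch)
```

After normalization, every pixel lies in [0, 1], matching the bounded input range expected by typical sigmoid-activated networks of the kind used for image compression.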
