Abstract
An autoencoder is a layered neural network whose structure can be viewed as consisting of an encoder, which compresses an input vector to a lower-dimensional vector, and a decoder, which transforms the low-dimensional vector back to the original input vector (or one that is very similar). In this article, we explore the compressive power of autoencoders that are Boolean threshold networks by studying the numbers of nodes and layers required to ensure that each vector in a given set of distinct binary input vectors is transformed back to its original. We show that for any set of n distinct vectors there exists a seven-layer autoencoder with the optimal compression ratio (i.e., the size of the middle layer is logarithmic in n), but that there is a set of n vectors for which no three-layer autoencoder has a middle layer of logarithmic size. In addition, we present a kind of tradeoff: if the compression ratio is allowed to be considerably larger than the optimal, then a five-layer autoencoder exists. We also study the numbers of nodes and layers required only for encoding, and the results suggest that the decoding part is the bottleneck of autoencoding. For example, there always exists a three-layer Boolean threshold encoder that compresses n vectors into a dimension that is twice the logarithm of n.
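To make the setting concrete, the following Python sketch builds a hand-crafted Boolean threshold autoencoder for a toy set of n = 4 distinct binary vectors, with a middle layer of ceil(log2 n) = 2 nodes (the optimal compression ratio discussed above). This is only an illustration of the model of computation; the weights, thresholds, and one-hot input set are assumptions for the example and are not the construction from the paper.

```python
import numpy as np

def threshold_layer(x, W, t):
    """Boolean threshold layer: output j is 1 iff W[j] . x >= t[j]."""
    return (W @ x >= t).astype(int)

# Toy input set: n = 4 distinct 4-bit (one-hot) vectors.
inputs = [np.array(v) for v in
          ([1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1])]

# Encoder: maps the i-th one-hot vector to the 2-bit binary code of i.
# Code bit 0 is 1 for inputs 2 and 3; code bit 1 is 1 for inputs 1 and 3.
W_enc = np.array([[0, 0, 1, 1],
                  [0, 1, 0, 1]])
t_enc = np.array([1, 1])

# Decoder: output neuron i fires iff the 2-bit code equals binary(i),
# realized with +-1 weights and suitable thresholds.
W_dec = np.array([[-1, -1],   # fires iff code == (0, 0)
                  [-1,  1],   # fires iff code == (0, 1)
                  [ 1, -1],   # fires iff code == (1, 0)
                  [ 1,  1]])  # fires iff code == (1, 1)
t_dec = np.array([0, 1, 1, 2])

# Verify that each input is mapped back to itself through the 2-node middle layer.
for x in inputs:
    code = threshold_layer(x, W_enc, t_enc)
    x_hat = threshold_layer(code, W_dec, t_dec)
    assert (x_hat == x).all()
```

Note that this particular input set happens to admit a three-layer (input, middle, output) solution; the result stated in the abstract is that some sets of n vectors admit no such three-layer autoencoder with a logarithmic-size middle layer, which is why the general construction needs more layers.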