Abstract

A sparse auto-encoder is an effective algorithm for learning features from unlabeled data in deep neural-network learning. In conventional sparse auto-encoder training, for each layer of a deep neural network, all feature units are constructed simultaneously at the beginning, and after training, several similar/redundant features remain at the end of the learning process. In this paper, we propose a novel alternative method for learning the features of each layer of the network; our method incrementally constructs features by adding primitive/simple features first and then gradually learning finer/more complicated features. We believe that with our proposed method a greater variety of features can be obtained, which will improve the performance of the network. We run experiments on the MNIST data set. The experimental results show that sparse auto-encoders using our incremental feature construction provide better accuracy than a sparse auto-encoder using the conventional feature construction. Moreover, the shapes of the obtained features contain both primitive strokes/lines and finer curves/more complicated shapes that compose the digits, as expected.
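As an illustration only, here is a minimal Python/NumPy sketch of the incremental idea described above: train a small hidden layer first, then repeatedly append freshly initialized units and continue training, so earlier units can settle on primitive features while later units pick up finer ones. The class name, the L1 sparsity penalty, the tied-weight decoder, and all hyperparameters are assumptions made for illustration, not the paper's exact training procedure.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class IncrementalSparseAutoencoder:
    """One-hidden-layer autoencoder whose hidden layer grows over time.

    Sketch of the abstract's idea: train a few hidden units first,
    then append freshly initialized units and keep training, so early
    units capture primitive features and later units capture finer ones.
    The L1 sparsity penalty and hyperparameters are illustrative
    assumptions, not the paper's formulation.
    """

    def __init__(self, n_visible, n_hidden_init, rng=None):
        self.rng = rng or np.random.default_rng(0)
        self.n_visible = n_visible
        self.W = 0.01 * self.rng.standard_normal((n_visible, n_hidden_init))
        self.b = np.zeros(n_hidden_init)   # hidden biases
        self.c = np.zeros(n_visible)       # visible biases

    def add_units(self, n_new):
        """Append n_new randomly initialized hidden units (incremental step)."""
        W_new = 0.01 * self.rng.standard_normal((self.n_visible, n_new))
        self.W = np.hstack([self.W, W_new])
        self.b = np.concatenate([self.b, np.zeros(n_new)])

    def train(self, X, epochs=20, lr=0.1, l1=1e-3):
        """Batch gradient descent on squared reconstruction error plus an
        L1 sparsity penalty on hidden activations, with tied weights."""
        n = X.shape[0]
        for _ in range(epochs):
            H = sigmoid(X @ self.W + self.b)   # hidden activations
            X_hat = H @ self.W.T + self.c      # linear reconstruction
            R = X_hat - X                      # reconstruction error
            # Backprop through the tied decoder and the sigmoid encoder.
            dH = R @ self.W + l1 * np.sign(H)
            dZ = dH * H * (1.0 - H)
            dW = (X.T @ dZ + R.T @ H) / n
            self.W -= lr * dW
            self.b -= lr * dZ.mean(axis=0)
            self.c -= lr * R.mean(axis=0)

# Toy usage on random data standing in for MNIST (an assumption):
X = np.random.default_rng(1).random((256, 784))
ae = IncrementalSparseAutoencoder(n_visible=784, n_hidden_init=16)
for _ in range(4):          # grow the hidden layer in stages
    ae.train(X)
    ae.add_units(16)        # new units are free to learn finer features
ae.train(X)                 # final pass with the full hidden layer
```

In the paper's setting, X would hold MNIST images, and each growth stage would correspond to one round of the proposed incremental feature construction.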
