Abstract

In this paper, we present a novel way of pre-training deep architectures using the stochastic least squares autoencoder (SLSA). The SLSA combines stochastic least squares estimation with logistic sampling; this paper highlights the usefulness of the stochastic least squares approach coupled with the numerical trick of constraining the logistic sampling process. The approach was benchmarked against other methods, including Neural Nets (NN), Deep Belief Nets (DBN), and the Stacked Denoising Autoencoder (SDAE), on the MNIST dataset. In addition, the SLSA architecture was tested against established methods such as the Support Vector Machine (SVM) and the Naive Bayes classifier (NB) on the Reuters-21578 and MNIST datasets. The experiments show the promise of SLSA as a pre-training step: stacked SLSAs yielded the lowest classification error on MNIST and the highest F-measure scores on Reuters-21578. Hence, this paper establishes the value of pre-training deep neural networks with the SLSA.
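The abstract does not spell out the SLSA update rules, so the following is only a generic sketch of the paradigm it builds on: greedy layer-wise pre-training of stacked autoencoders with a least-squares (squared reconstruction error) objective and logistic (sigmoid) hidden units. All function names and hyperparameters here are illustrative assumptions, not the authors' method; the input clipping loosely mirrors the idea of constraining the logistic activation numerically.

```python
import numpy as np

def sigmoid(z):
    # Clip the pre-activation to avoid overflow in exp(); a numerical
    # constraint on the logistic function (assumption, not the paper's trick).
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

def train_autoencoder(X, n_hidden, lr=0.1, epochs=200, seed=0):
    """One least-squares autoencoder layer: sigmoid encoder, linear decoder,
    squared reconstruction error minimized by plain gradient descent."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(0.0, 0.1, (d, n_hidden))   # encoder weights
    b = np.zeros(n_hidden)                    # encoder bias
    V = rng.normal(0.0, 0.1, (n_hidden, d))   # decoder weights
    c = np.zeros(d)                           # decoder bias
    for _ in range(epochs):
        H = sigmoid(X @ W + b)                # encode
        R = H @ V + c                         # linear decode
        E = R - X                             # reconstruction error
        # Gradients of (1/2n) * ||R - X||^2
        dV = H.T @ E / n
        dc = E.mean(axis=0)
        dH = (E @ V.T) * H * (1.0 - H)        # backprop through sigmoid
        dW = X.T @ dH / n
        db = dH.mean(axis=0)
        V -= lr * dV; c -= lr * dc
        W -= lr * dW; b -= lr * db
    return W, b

def pretrain_stack(X, layer_sizes):
    """Greedy layer-wise pre-training: each layer learns to reconstruct the
    previous layer's code; the encoder halves initialize a deep network."""
    params, H = [], X
    for n_hidden in layer_sizes:
        W, b = train_autoencoder(H, n_hidden)
        params.append((W, b))
        H = sigmoid(H @ W + b)                # feed code to the next layer
    return params

# Toy usage: pre-train a two-layer stack on random binary "pixels".
rng = np.random.default_rng(1)
X = (rng.random((64, 20)) > 0.5).astype(float)
stack = pretrain_stack(X, [10, 5])
```

After pre-training, the learned `(W, b)` pairs would initialize the hidden layers of a classifier that is then fine-tuned with labels, which is the role the abstract assigns to the SLSA.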
