Abstract
Deep learning models built on deep architectures have demonstrated impressive results in alleviating common problems of feedforward neural networks, such as convergence to poor local minima. The basic architectural concepts behind these models were established by early research on neural networks, including unsupervised pretraining algorithms and the greedy layer-wise training strategy, evaluated on small-scale benchmark datasets. We present practical experiments that evaluate deep classifiers on the large-scale Supersymmetric (SUSY) particle dataset under varying levels of label noise to examine the robustness of their regularizers. The models compared in this study are a single-hidden-layer Multilayer Perceptron (MLP), deep stacked Neural Networks (DNN), Stacked Denoising Autoencoders (SDAE), and Deep Belief Networks (DBN). The results show that the deep architectures (DNN and SDAE) reduce the error gap between learning from low-level features and learning from the complete feature set by up to 78% compared to the shallow model (MLP). In our experiments, SDAE also maintains its superiority on less noisy datasets and exhibits nearly linear convergence, benefiting from unsupervised pretraining of its layers and early stopping during hyperparameter tuning.
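To make the pretraining procedure named in the abstract concrete, the following is a minimal sketch (not the authors' code) of greedy layer-wise pretraining for a Stacked Denoising Autoencoder, written in PyTorch. The layer widths, Gaussian corruption level, learning rate, and epoch counts are illustrative assumptions rather than the paper's settings; the 18-dimensional input is a placeholder shape matching the SUSY benchmark's complete feature set.

```python
# Sketch: greedy layer-wise pretraining of an SDAE, then supervised stacking.
# All hyperparameters here are assumptions for illustration only.
import torch
import torch.nn as nn

def pretrain_dae(encoder, data, noise_std=0.3, epochs=5, lr=1e-3):
    """Train one denoising-autoencoder layer: corrupt the input with
    Gaussian noise and learn to reconstruct the clean input."""
    decoder = nn.Linear(encoder.out_features, encoder.in_features)
    opt = torch.optim.Adam(
        list(encoder.parameters()) + list(decoder.parameters()), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        corrupted = data + noise_std * torch.randn_like(data)
        recon = decoder(torch.sigmoid(encoder(corrupted)))
        loss = loss_fn(recon, data)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return encoder

# Greedy layer-wise strategy: each layer is pretrained on the (detached)
# output of the stack trained so far.
torch.manual_seed(0)
X = torch.randn(256, 18)            # placeholder for the 18 SUSY features
sizes = [18, 64, 32]                # hypothetical layer widths
encoders = []
h = X
for d_in, d_out in zip(sizes, sizes[1:]):
    enc = pretrain_dae(nn.Linear(d_in, d_out), h)
    encoders.append(enc)
    with torch.no_grad():
        h = torch.sigmoid(enc(h))   # feed forward to pretrain the next layer

# Supervised fine-tuning: stack the pretrained encoders and add a
# classification head; early stopping on a held-out validation set
# would wrap the fine-tuning loop.
model = nn.Sequential(
    *[layer for enc in encoders for layer in (enc, nn.Sigmoid())],
    nn.Linear(sizes[-1], 1))
```

The key design point the sketch illustrates is that each denoising autoencoder is trained in isolation on the previous layer's representation, so the deep network starts fine-tuning from weights that already capture input structure, which is the mechanism the abstract credits for escaping poor local minima.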