Abstract

Activation functions such as Tanh and Sigmoid are widely used in Deep Neural Networks (DNNs) and pattern-classification problems. To take advantage of different activation functions, this work proposes the Broad Autoencoder Features (BAF). The BAF consists of four parallel-connected Stacked Autoencoders (SAEs), each using a different activation function: Sigmoid, Tanh, ReLU, or Softplus. With this broad setting, the final learned features merge several nonlinear mappings of the original input features, which helps to extract more information from them. Experimental results show that the BAF yields better-learned features and better classification performance.
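The parallel-branch idea behind the BAF can be sketched as follows. This is a minimal illustration, not the authors' implementation: it shows four single-layer encoders, one per activation, applied to the same input, with their hidden codes concatenated into one broad feature vector. All sizes and the random weights are assumptions for the sketch; in the BAF each branch is a trained Stacked Autoencoder.

```python
import numpy as np

# Sketch of the BAF idea (not the paper's code): four encoder branches,
# one per activation function, whose codes are concatenated.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    return np.maximum(0.0, z)

def softplus(z):
    return np.log1p(np.exp(z))

def encode(x, W, b, act):
    """One branch's encoding step: act(W x + b)."""
    return act(W @ x + b)

rng = np.random.default_rng(0)
d_in, d_hid = 8, 4                      # illustrative sizes
x = rng.standard_normal(d_in)
activations = [sigmoid, np.tanh, relu, softplus]

# Independent (here random) weights per branch; in the BAF each branch
# is a Stacked Autoencoder trained before the codes are merged.
branches = [(rng.standard_normal((d_hid, d_in)), np.zeros(d_hid))
            for _ in activations]

baf = np.concatenate([encode(x, W, b, act)
                      for (W, b), act in zip(branches, activations)])
print(baf.shape)  # 4 branches x 4 hidden units each -> (16,)
```

The concatenated vector is then what a downstream classifier would consume, so the classifier sees four different nonlinear views of the same input at once.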

Highlights

  • With the rapid advancement and deployment of information technologies, huge volumes of data in various forms, for example video, image, and medical data, are accessible on the Internet

  • Experimental results show that the Broad Autoencoder Features (BAF) yield better-learned features and better classification performance

  • In this paper, we propose the BAF to deal with the image pattern classification problems

Summary

INTRODUCTION

With the rapid advancement and deployment of information technologies, huge volumes of data in various forms, for example video, image, and medical data, have become accessible on the Internet. Generative models, e.g., the Autoencoder (AE) (Bengio et al., 2009), the Variational Autoencoder (Kingma and Welling, 2014; Mescheder et al., 2017; Tan et al., 2018), and the Importance Weighted Autoencoder (Burda et al., 2016), were created to use direct back-propagation for training, avoiding the difficulties posed by MCMC-based training. Each of these methods can be regarded as a projection that yields a considerable classification result by mapping samples from the original feature space into a projected space with better class separability for pattern-classification problems (Wasikowski et al., 2010). The convergence of SGD with ReLU is much faster than with Sigmoid or Tanh, while Softplus can be regarded as a smoothed version of ReLU. These observations motivate us to exploit the advantages of different activation functions, and we propose the Broad Autoencoder Features (BAF) to extend the superiority of the DAF.

RELATED WORK
BROAD AUTOENCODER FEATURES
Training of the BAF
Computational Complexity of the BAF
EXPERIMENTAL RESULTS
Methods
CONCLUSION

