Abstract

The stacked autoencoder (SAE), one of the deep learning methods, has been widely applied to one-dimensional data sets in recent years. In this study, a comparative performance analysis of the SAE architecture was carried out using the five most commonly used optimization techniques and two well-known activation functions. Stochastic Gradient Descent (SGD), Root Mean Square Propagation (RMSProp), Adaptive Moment Estimation (Adam), Adaptive Delta (Adadelta), and Nesterov-accelerated Adaptive Moment Estimation (Nadam) were used as optimization techniques, and Softmax and Sigmoid were used as activation functions. Two different data sets from the public UCI database were used. To verify the performance of the SAE model, experimental studies were performed on these data sets with each combination of optimization and activation technique separately. As a result of the experimental studies, success rates of 88.89% and 85.19% were achieved on the Cryotherapy and Immunotherapy data sets, respectively, by using the Softmax activation function with the SGD optimization method on a three-layer SAE. After the training phase, the adaptive optimization techniques Adam, Adadelta, Nadam, and RMSProp were observed to exhibit a weaker learning process than the stochastic method SGD.
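The abstract does not include implementation details, but the reported configuration (a three-layer SAE trained with SGD and a Softmax output) can be illustrated with a minimal sketch. The code below is a hypothetical Keras-style reconstruction, not the authors' code: layer sizes, learning rate, epoch count, and the random stand-in data are assumptions, and the layer-wise unsupervised pretraining typical of SAEs is omitted for brevity.

```python
# Hypothetical sketch: three stacked encoding layers with a Softmax head,
# trained by SGD, loosely matching the configuration named in the abstract.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def build_sae_classifier(n_features, n_classes, hidden_sizes=(32, 16, 8)):
    """Three-layer encoder stack followed by a Softmax classification layer."""
    inputs = keras.Input(shape=(n_features,))
    x = inputs
    for units in hidden_sizes:  # three stacked hidden (encoding) layers
        x = layers.Dense(units, activation="sigmoid")(x)
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    model = keras.Model(inputs, outputs)
    model.compile(
        optimizer=keras.optimizers.SGD(learning_rate=0.01),  # assumed rate
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model

# Random data standing in for a small UCI-style tabular set
# (e.g. 90 samples, 6 features, binary target); real data must be loaded separately.
X = np.random.rand(90, 6).astype("float32")
y = np.random.randint(0, 2, size=90)

model = build_sae_classifier(n_features=6, n_classes=2)
model.fit(X, y, epochs=50, batch_size=8, validation_split=0.2, verbose=0)
```

Swapping the optimizer object (e.g. `keras.optimizers.Adam()`, `Adadelta()`, `Nadam()`, or `RMSprop()`) is what a comparison like the one described in the abstract would vary, with all other settings held fixed.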
