Abstract
The performance of two algorithms may be compared using an asymptotic technique in algorithm analysis, which focuses largely on the growth rate as the number of inputs grows. Sigmoid and ReLU activation functions are widely employed in ANNs (Yingying, 2020), and each has advantages and disadvantages that should be considered when designing ANN solutions for a given problem. This study aimed to compare the performance of the sigmoid and ReLU activation functions during training using an asymptotic approach. The work focuses on training time complexity as the basis for comparing the two activation functions. The results show that the sigmoid activation function takes more computation time than ReLU in the forward pass, loss computation, and backward propagation. This computational cost becomes significant when dealing with deep neural networks with hundreds to thousands of neurons. Overall, the training time for a ReLU-based neural network will be better than that of a sigmoid-based one. Sigmoid has a higher computational cost than ReLU, but both functions exhibit a linear growth rate.

Keywords: Back propagation, Loss computation, Sigmoid activation, ReLU activation, ANNs

DOI: 10.7176/NCS/13-05

Publication date: June 30th 2022
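The per-element cost difference the abstract describes can be illustrated with a minimal sketch (not the paper's own code): sigmoid requires an exponential per element for both the activation and (via its output) the gradient, while ReLU needs only a comparison, yet both scale linearly with the number of inputs.

```python
import numpy as np

def sigmoid(x):
    # Sigmoid: 1 / (1 + e^-x); requires an exponential per element
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(y):
    # Derivative in terms of the sigmoid output y: y * (1 - y)
    return y * (1.0 - y)

def relu(x):
    # ReLU: max(0, x); a single comparison per element
    return np.maximum(0.0, x)

def relu_grad(x):
    # Subgradient used in backpropagation: 1 where x > 0, else 0
    return (x > 0).astype(x.dtype)

x = np.array([-2.0, 0.0, 3.0])
print(sigmoid(x))  # smooth values in (0, 1)
print(relu(x))     # negatives clipped to zero
```

Both functions touch each input exactly once, so each is O(n) in the input size; the difference the study reports lies in the constant factor (exponential versus comparison), not in the asymptotic growth rate.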