Abstract

In a number of recent studies, the Scaled Exponential Linear Unit (SELU) activation function has been shown to automatically regularize network parameters and to make learning robust due to its self-normalizing properties. In this paper we explore the use of SELU for training different neural network architectures for recommender systems and validate that it indeed outperforms other activation functions for these types of problems. More interestingly, however, we show that SELU also exhibits performance invariance with respect to the choice of optimization algorithm and its corresponding hyperparameters. We demonstrate this through experiments that compare several activation functions and optimization algorithms across different neural network architectures on standard recommender systems benchmark datasets.
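For reference, the sketch below shows the standard SELU definition from Klambauer et al. (2017) and a small numerical check of the self-normalizing property the abstract refers to. It is an illustrative example only; the paper's own architectures, datasets, and training code are not reproduced here.

```python
import numpy as np

# Standard SELU constants (Klambauer et al., 2017).
ALPHA = 1.6732632423543772
SCALE = 1.0507009873554805

def selu(x: np.ndarray) -> np.ndarray:
    """Scaled Exponential Linear Unit:
    scale * x               for x > 0
    scale * alpha * (e^x-1) otherwise
    """
    return SCALE * np.where(x > 0, x, ALPHA * np.expm1(x))

# Self-normalizing property (illustrative check): for inputs with zero mean
# and unit variance, SELU activations stay close to zero mean and unit
# variance, which is what keeps learning stable across layers.
if __name__ == "__main__":
    x = np.random.randn(100_000)
    y = selu(x)
    print(f"mean={y.mean():.3f}, std={y.std():.3f}")  # roughly 0.0 and 1.0
```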
