Abstract
Neural networks are a popular method in machine learning research, and activation functions, notably ReLU and tanh, play an important role in minimizing the error between the output layer and the target class. Varying the number of hidden layers and the number of neurons per hidden layer, this study analyzes eight models for classifying the Titanic survivor dataset. The results show that the ReLU function performs better than the tanh function, with higher average accuracy and precision. Adding more hidden layers does not improve classification performance: average accuracy and precision decrease between the models with three hidden layers and those with four. The highest accuracy was obtained by the model using ReLU with 4 hidden layers and 50 neurons per hidden layer, while the highest precision was obtained by the model using ReLU with 4 hidden layers and 100 neurons per hidden layer.
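The comparison described above can be sketched with scikit-learn's `MLPClassifier`, which exposes the activation function and hidden-layer sizes directly. This is a minimal illustration, not the authors' code: it uses a synthetic binary dataset as a stand-in for the Titanic survivor data, and the feature count, preprocessing, and training settings are assumptions.

```python
# Hedged sketch: ReLU vs. tanh MLPs with 4 hidden layers of 50 neurons,
# mirroring the best-performing configuration reported in the abstract.
# Synthetic data stands in for the Titanic dataset (an assumption).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, precision_score

X, y = make_classification(n_samples=800, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for activation in ("relu", "tanh"):
    clf = MLPClassifier(hidden_layer_sizes=(50, 50, 50, 50),
                        activation=activation,
                        max_iter=500, random_state=0)
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    print(activation,
          "accuracy:", round(accuracy_score(y_te, pred), 3),
          "precision:", round(precision_score(y_te, pred), 3))
```

Swapping `hidden_layer_sizes` for `(100, 100, 100, 100)` or a three-layer tuple reproduces the other configurations the study compares.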