Abstract

The back-propagation neural network (BPNN) is one of the most basic and commonly used models in machine learning. Hidden layers play a crucial role in maximizing the performance of neural networks, especially when solving complicated problems that demand strict adherence to accuracy and time-complexity requirements. Because there is no established procedure for determining the number of hidden-layer neurons, practitioners currently rely on experience and trial and error. To investigate this relationship, this article conducts extensive experiments in which BPNN models are designed and trained with varying numbers of hidden-layer neurons. The experiments leverage benchmark data sets and quantify accuracy with appropriate error metrics. The analysis focuses on how different neuron counts affect network performance. Under specific assumptions, the findings show a relationship between the number of hidden-layer neurons and BPNN accuracy. Statistics from recent neural network applications indicate that decreasing the number of hidden-layer neurons degrades the network's accuracy, because a network with too few hidden neurons may be trained incorrectly on complex problems. Conversely, once the number of hidden-layer neurons exceeds the ideal amount, the time complexity rises by orders of magnitude relative to the accuracy gained.
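As context for the experimental design described above, the following is a minimal sketch of such a hidden-layer-width sweep. It assumes scikit-learn's MLPClassifier as a stand-in for the BPNN and the digits data set as the benchmark; both are assumptions for illustration, since the abstract does not name the paper's actual model code or data sets.

```python
import time

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Benchmark data set (the digits set is an assumption; the paper's
# exact benchmarks are not named in the abstract).
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Sweep the hidden-layer width and record test accuracy and wall-clock
# training time, mirroring the accuracy-vs-time-complexity trade-off
# the abstract describes.
for n_hidden in (2, 8, 32, 128, 512):
    clf = MLPClassifier(
        hidden_layer_sizes=(n_hidden,),  # one hidden layer of n_hidden neurons
        solver="sgd",                    # plain gradient descent, closest to classic BP
        max_iter=500,
        random_state=0,
    )
    start = time.perf_counter()
    clf.fit(X_train, y_train)
    elapsed = time.perf_counter() - start
    acc = clf.score(X_test, y_test)
    print(f"{n_hidden:4d} neurons: accuracy={acc:.3f}, train time={elapsed:.2f}s")
```

On a sweep like this, very small hidden layers typically underfit (low accuracy), while widths past the task's ideal point add training time with little or no accuracy gain, which is the pattern the abstract reports.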
