Abstract

In recent years, artificial neural networks (ANNs), built on the foundational model established by McCulloch and Pitts in 1943, have been at the forefront of computational research. Despite their prominence, ANNs face a number of challenges, including hyperparameter tuning and the need for vast datasets. Because many strategies have focused predominantly on increasing the depth and intricacy of these networks, the processing capabilities of individual neurons are occasionally overlooked. To address this, a biologically accurate dendritic neuron model (DNM) that mirrors the spatio-temporal features of real neurons was introduced. Although the DNM shows outstanding performance on classification tasks, it struggles with complex parameter adjustment. In this study, we embed the hyperparameters of the DNM into an evolutionary algorithm, transforming the setting of the DNM's hyperparameters from manual tuning into adaptive adjustment as the algorithm iterates. The newly proposed framework represents a neuron that evolves alongside the iterations, thus simplifying the parameter-tuning process. Comparative evaluation on benchmark classification datasets from the UCI Machine Learning Repository indicates that these minor enhancements lead to significant improvements in DNM performance, surpassing other leading-edge algorithms in both accuracy and efficiency. In addition, we analyzed the iterative process using complex networks, and the results indicate that the information interaction during the iteration and evolution of the DNM follows a power-law distribution. This finding may offer insights for the study of neuron model training.
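The adaptive scheme the abstract describes can be sketched in miniature: a population of candidate hyperparameter sets for a dendritic neuron is evaluated by classification accuracy and refined by selection and mutation across generations. The sketch below is illustrative only, not the paper's implementation: it assumes a simplified two-branch DNM with fixed synaptic weights, a toy synthetic dataset, and truncation selection with Gaussian mutation over three hypothetical hyperparameters (synaptic steepness `k`, soma steepness `ksoma`, soma threshold `theta`).

```python
import math
import random

random.seed(1)

def sigmoid(z):
    # clamp to avoid math.exp overflow for large |z|
    z = max(-60.0, min(60.0, z))
    return 1.0 / (1.0 + math.exp(-z))

def dnm_output(x, w, q, k, ksoma, theta):
    """Toy dendritic neuron: each branch multiplies sigmoidal synapse
    outputs; the soma sums the branches and applies a final sigmoid."""
    v = 0.0
    for wb, qb in zip(w, q):
        prod = 1.0
        for xi, wi, qi in zip(x, wb, qb):
            prod *= sigmoid(k * (wi * xi - qi))
        v += prod
    return sigmoid(ksoma * (v - theta))

# Fixed (illustrative) synaptic weights/thresholds: 2 branches x 2 inputs.
W = [[1.0, 1.0], [-1.0, -1.0]]
Q = [[0.3, 0.3], [-0.7, -0.7]]

# Synthetic 2-D data: label 1 when x0 + x1 > 1.
DATA = []
for _ in range(80):
    x = (random.random(), random.random())
    DATA.append((x, 1 if x[0] + x[1] > 1.0 else 0))

def fitness(ind):
    # Fitness = classification accuracy of the DNM under these hyperparameters.
    k, ksoma, theta = ind
    correct = 0
    for x, y in DATA:
        pred = 1 if dnm_output(x, W, Q, k, ksoma, theta) > 0.5 else 0
        correct += (pred == y)
    return correct / len(DATA)

def evolve(pop_size=20, gens=30, sigma=0.3):
    # Individuals encode (k, ksoma, theta), initialised at random.
    pop = [(random.uniform(0.5, 10.0), random.uniform(0.5, 10.0),
            random.uniform(0.0, 3.0)) for _ in range(pop_size)]
    best = max(pop, key=fitness)
    history = [fitness(best)]          # elitist record: non-decreasing
    for _ in range(gens):
        # Truncation selection: the top half survive as parents.
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]
        # Gaussian mutation produces one child per parent.
        children = [tuple(max(1e-3, g + random.gauss(0.0, sigma)) for g in p)
                    for p in parents]
        pop = parents + children
        cand = max(pop, key=fitness)
        if fitness(cand) > fitness(best):
            best = cand
        history.append(fitness(best))
    return best, history

best, history = evolve()
print("best accuracy:", history[-1])
```

Because the best individual is carried along elitistically, the recorded fitness can only improve or stay flat over generations, mirroring the abstract's point that hyperparameters are adjusted adaptively as the algorithm iterates rather than fixed by hand.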
