Abstract

Artificial neural networks with radial basis functions are used to diagnose patients with multiple sclerosis, but training this type of network requires a great deal of time, so speeding it up would be advantageous. The Particle Swarm Optimization (PSO) algorithm was previously applied to a particular case, but the results were poor. It was suggested that this scenario be revisited using different network architectures, other criteria for selecting the most significant features of the data, and other training sets. In this paper we follow these suggestions and work with varying numbers of hidden and input nodes. The most significant coefficients were extracted using the Kolmogorov-Smirnov test, Principal Component Analysis, and the Largest Coefficient criterion. The training process used the leave-one-out method. The neural networks were then trained using the Particle Swarm Optimization technique and the gradient descent algorithm. Once the networks had been trained, we tested them on the left-out element. We repeated the process and averaged over all the cases. Our results confirmed that the standard PSO algorithm is not a good training method on its own. However, we then used the weights that gave poor results with the PSO approach as the starting values for a network trained with gradient descent. In some cases, the results improved compared with starting from random values.
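The abstract does not give the network or algorithm details, so the following is only an illustrative sketch of the two-stage idea it describes: run standard PSO over the output weights of a small RBF network, then refine the PSO solution with gradient descent. The toy 1-D regression data, the fixed centres and width, and all hyperparameters (swarm size, inertia, learning rate) are assumptions chosen for the example, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data (a stand-in for the real diagnostic features).
X = np.linspace(-1.0, 1.0, 40)
y = np.sin(3.0 * X)

# Fixed RBF centres and width; for simplicity we optimise only the output weights.
centers = np.linspace(-1.0, 1.0, 5)
width = 0.5

def hidden(X):
    # Hidden-layer activations: Gaussian radial basis functions.
    return np.exp(-((X[:, None] - centers[None, :]) ** 2) / (2.0 * width ** 2))

H = hidden(X)

def mse(w):
    # Mean squared error of the network output H @ w against the targets.
    return float(np.mean((H @ w - y) ** 2))

def pso(n_particles=20, iters=60):
    # Standard PSO: inertia, cognitive, and social terms over the weight vector.
    dim = centers.size
    pos = rng.uniform(-1.0, 1.0, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_f = pos.copy(), np.array([mse(p) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        f = np.array([mse(p) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest

def gradient_descent(w, lr=0.1, iters=500):
    # Batch gradient descent on the same MSE objective, started from w.
    w = w.copy()
    for _ in range(iters):
        grad = 2.0 * H.T @ (H @ w - y) / len(y)
        w -= lr * grad
    return w

w_pso = pso()                        # coarse PSO solution
w_refined = gradient_descent(w_pso)  # refined by gradient descent
```

Because the error surface over the output weights is a convex quadratic here, gradient descent with a stable learning rate can only lower the PSO error, which mirrors the paper's observation that PSO results improve when used as a starting point for gradient descent.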
