Abstract

In a typical machine learning problem, one must build from a finite training set a model that generalizes the properties of the training examples to new examples. The model should fit the training set as closely as possible but, especially in real-world problems where the data are often corrupted by various sources of noise, it must avoid depending too strictly on the training examples themselves. Recent studies of the relationship between this kind of learning problem and regularization theory for ill-posed inverse problems have given rise to new regularized learning algorithms. In this paper we review some of these learning methods and propose an accelerated version of the classical Landweber iterative scheme that is particularly efficient from a computational viewpoint. Finally, we compare the performance of these methods with the classical Support Vector Machine learning algorithm in a real-world experiment on brain activity interpretation through the analysis of functional magnetic resonance imaging data.
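
For reference, the following is a minimal sketch of the classical (non-accelerated) Landweber iteration for a linear system A x = b, where early stopping of the iteration plays the role of regularization; the accelerated variant proposed in the paper is not reproduced here, and the function name, step-size choice, and test data are illustrative assumptions rather than the authors' implementation.

    import numpy as np

    def landweber(A, b, n_iter=100, tau=None):
        # Classical Landweber iteration x_{k+1} = x_k + tau * A^T (b - A x_k).
        # For convergence the step size must satisfy 0 < tau < 2 / ||A||_2^2;
        # stopping after a finite number of iterations acts as a regularizer
        # for ill-posed problems. (Sketch; parameters are illustrative.)
        if tau is None:
            # Spectral norm of A = largest singular value.
            tau = 1.0 / np.linalg.norm(A, 2) ** 2
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            # Gradient step on the least-squares residual ||A x - b||^2 / 2.
            x = x + tau * A.T @ (b - A @ x)
        return x

    # Illustrative usage on a small noisy problem.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 20))
    x_true = rng.standard_normal(20)
    b = A @ x_true + 0.01 * rng.standard_normal(50)
    x_hat = landweber(A, b, n_iter=200)

In this scheme the number of iterations n_iter is the regularization parameter: too few iterations underfit the data, while too many reconstruct the noise, which mirrors the bias/variance trade-off described in the abstract.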
