Abstract

In a typical machine learning problem, one must build a model from a finite training set that generalizes the properties of the training examples to new examples. The model should reflect the training set as faithfully as possible but, especially in real-world problems where the data are often corrupted by several sources of noise, it must avoid depending too strictly on the training examples themselves. Recent studies on the relationship between this kind of learning problem and regularization theory for ill-posed inverse problems have given rise to new regularized learning algorithms. In this paper we review some of these learning methods and propose an accelerated version of the classical Landweber iterative scheme that is particularly efficient from a computational viewpoint. Finally, we compare the performance of these methods with the classical Support Vector Machine learning algorithm in a real-world experiment concerning brain-activity interpretation through the analysis of functional magnetic resonance imaging data.
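For context, the classical (unaccelerated) Landweber scheme referenced in the abstract is a fixed-point iteration for least-squares problems. The sketch below is a minimal illustration under standard assumptions (a linear model `A x = b`, step size below `2/||A||^2`); the function name and parameters are illustrative and the paper's accelerated variant is not reproduced here.

```python
import numpy as np

def landweber(A, b, tau=None, n_iter=100):
    """Classical Landweber iteration for min_x ||A x - b||^2.

    Iterates x_{k+1} = x_k + tau * A^T (b - A x_k); early stopping of
    the iteration acts as a form of regularization for ill-posed problems.
    """
    if tau is None:
        # Convergence requires 0 < tau < 2 / ||A||^2 (spectral norm).
        tau = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + tau * A.T @ (b - A @ x)
    return x
```

The iteration count plays the role of the regularization parameter: fewer iterations yield a smoother, more biased solution, while more iterations fit the data (and its noise) more closely.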
