Abstract
Learning theory aims to build a solid mathematical foundation for machine learning. A core objective of learning theory is to estimate the learning rates of various learning algorithms in order to analyze their generalization ability. So far, most such research efforts have focused on single-task kernel methods; there is little parallel work on the learning rates of multitask kernel methods. We shall present an analysis of the learning rates for multitask regularization networks and ℓ1-norm coefficient regularization. Compared to the existing work on learning rate estimates of multitask regularization networks, our study is more broadly applicable in that we do not require the regression function to lie in the vector-valued reproducing kernel Hilbert space of the chosen matrix-valued reproducing kernel. Our work on the learning rate of multitask ℓ1-norm coefficient regularization is new. For both methods, our results reveal a quantitative dependency of the learning rates on the number of tasks, which is also new in the literature.
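To fix ideas, the following is a minimal sketch of the two estimators the abstract names, in notation of our own choosing (a sample z = {(x_i, y_i)} of size m, T tasks, a matrix-valued kernel K); it assumes the standard least-squares setting, and the paper's exact normalizations and constants may differ. The multitask regularization network penalizes the RKHS norm:

\[
f_z \;=\; \operatorname*{arg\,min}_{f \in \mathcal{H}_K} \; \frac{1}{m} \sum_{i=1}^{m} \bigl\| f(x_i) - y_i \bigr\|_{\mathbb{R}^T}^{2} \;+\; \lambda \, \| f \|_{K}^{2},
\]

where \(\mathcal{H}_K\) is the vector-valued reproducing kernel Hilbert space of the matrix-valued kernel \(K : X \times X \to \mathbb{R}^{T \times T}\). The ℓ1-norm coefficient regularization scheme instead searches over kernel expansions on the sample and penalizes the coefficients directly:

\[
f_{\mathbf c} = \sum_{j=1}^{m} K(\cdot, x_j)\, c_j, \qquad
\mathbf c_z \;=\; \operatorname*{arg\,min}_{\mathbf c \in \mathbb{R}^{mT}} \; \frac{1}{m} \sum_{i=1}^{m} \bigl\| f_{\mathbf c}(x_i) - y_i \bigr\|_{\mathbb{R}^T}^{2} \;+\; \lambda \sum_{j=1}^{m} \| c_j \|_{1}.
\]

The contrast is that the first penalty acts on the norm of f in \(\mathcal{H}_K\), while the second acts on the expansion coefficients, so the resulting hypothesis space need not be the Hilbert space \(\mathcal{H}_K\) itself.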