Abstract
This paper is concerned with functional learning via two-stage sampled distribution regression. We study a multi-penalty regularization algorithm for distribution regression in the framework of learning theory. The algorithm aims at regressing to real-valued outputs from probability measures. The theoretical analysis of distribution regression is far from mature and quite challenging, since only second-stage samples are observable in practical settings. In our algorithm, to transfer the information carried by the distribution samples, we embed the distributions into a reproducing kernel Hilbert space H_K associated with a Mercer kernel K via the mean embedding technique. One of the primary contributions of this work is the introduction of a novel multi-penalty regularization algorithm, which is able to capture more potential features of the distribution regression problem. Optimal learning rates of the algorithm are obtained under mild conditions. The work also derives learning rates for distribution regression in the hard learning scenario f_ρ ∉ H_K, which has not been explored in the existing literature. Moreover, we propose a new distribution-regression-based distributed learning algorithm to address the large-scale data and information challenges arising from distribution data. Optimal learning rates are derived for the distributed learning algorithm as well. By providing new algorithms and establishing their learning rates, the work improves on the existing literature in several respects.
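To make the two-stage pipeline concrete, the following is a minimal sketch, not the paper's actual estimator: it assumes a Gaussian base kernel for the first-stage mean embeddings, takes the linear kernel on the empirical embeddings as the second-stage kernel K, and uses an illustrative two-penalty objective combining an RKHS-norm penalty with an empirical L2-type penalty. All function names, the toy data, and the parameter values (gamma, lam1, lam2) are assumptions for illustration only.

```python
import numpy as np

def gaussian_kernel(X, Y, gamma=1.0):
    """First-stage Gaussian kernel between sample matrices X (n_x, d) and Y (n_y, d)."""
    sq = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * sq)

def embedding_gram(bags, gamma=1.0):
    """Gram matrix of inner products between empirical mean embeddings:
    G[i, j] = <mu_hat_i, mu_hat_j> = mean over x in bag i, x' in bag j of k(x, x')."""
    m = len(bags)
    G = np.zeros((m, m))
    for i in range(m):
        for j in range(i, m):
            G[i, j] = G[j, i] = gaussian_kernel(bags[i], bags[j], gamma).mean()
    return G

def fit_multi_penalty(G, y, lam1=1e-2, lam2=1e-3):
    """Representer coefficients alpha for a simplified multi-penalty objective
        (1/m)||G a - y||^2 + lam1 * a^T G a + lam2 * (1/m) a^T G G a,
    i.e. an RKHS-norm penalty plus an empirical L2-type penalty (illustrative choice).
    Normal equations: ((1 + lam2) G G / m + lam1 G) a = G y / m."""
    m = len(y)
    A = (1.0 + lam2) * (G @ G) / m + lam1 * G
    b = G @ y / m
    return np.linalg.solve(A + 1e-12 * np.eye(m), b)

def predict(bags_train, alpha, bags_test, gamma=1.0):
    """Evaluate f(mu_hat_t) = sum_i alpha_i <mu_hat_t, mu_hat_i> for each test bag."""
    preds = []
    for bag_t in bags_test:
        k_t = np.array([gaussian_kernel(bag_t, bag_i, gamma).mean() for bag_i in bags_train])
        preds.append(k_t @ alpha)
    return np.array(preds)

# Toy usage: each distribution is observed only through a finite second-stage bag of samples.
rng = np.random.default_rng(0)
bags = [rng.normal(loc=mu, scale=1.0, size=(50, 2)) for mu in rng.uniform(-2, 2, size=10)]
y = np.array([bag.mean() for bag in bags])  # target depends on the underlying distribution
alpha = fit_multi_penalty(embedding_gram(bags), y)
print(predict(bags, alpha, bags[:3]))
```

A distributed variant in the spirit of the abstract would partition the bags into disjoint subsets, fit such an estimator on each subset, and average the local predictors; the sketch above serves only to illustrate the single-machine two-stage construction.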