Abstract
Theoretical analysis of divide-and-conquer distributed learning with the least-squares loss in a reproducing kernel Hilbert space (RKHS) has recently been explored within the framework of learning theory. However, studies of learning theory for general loss functions and hypothesis spaces remain limited. To better understand the properties and behavior of distributed empirical risk minimization (ERM), we introduce multi-view learning and use it to examine distributed ERM at a more fundamental level. In this work, we adopt a multi-view perspective on distributed ERM and study its risk performance for general loss functions and hypothesis spaces. The main theoretical results are two-fold. First, we derive two tight risk bounds under basic assumptions on the hypothesis space together with smoothness, Lipschitz continuity, and strong convexity of the loss function. Second, we develop two more general risk bounds for distributed ERM without the strong-convexity restriction. The present work not only fills the gap in the learning theory of distributed ERM for general loss functions and hypothesis spaces from a multi-view perspective; it also shows that, under certain conditions, the number of views can guarantee the performance.
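To fix ideas, the divide-and-conquer scheme the abstract refers to can be sketched as follows: the data are partitioned across machines, each machine solves a local ERM problem, and the local estimators are averaged. This is a minimal illustration with the least-squares loss and a ridge penalty, not the paper's own construction; the function names, the regularization parameter `lam`, and the synthetic data are all illustrative assumptions.

```python
import numpy as np

def local_ridge(X, y, lam):
    # Closed-form local ERM (ridge regression) on one partition:
    # w = (X^T X + lam * I)^{-1} X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def distributed_erm_average(X, y, n_machines, lam=1.0):
    # Divide-and-conquer: split the sample, solve ERM on each
    # partition independently, then average the local estimators.
    coefs = [local_ridge(Xs, ys, lam)
             for Xs, ys in zip(np.array_split(X, n_machines),
                               np.array_split(y, n_machines))]
    return np.mean(coefs, axis=0)

# Synthetic linear-model data (illustrative only).
rng = np.random.default_rng(0)
n, d = 1200, 5
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ w_true + 0.1 * rng.normal(size=n)

w_hat = distributed_erm_average(X, y, n_machines=6, lam=0.1)
print(np.linalg.norm(w_hat - w_true))
```

The averaged estimator typically recovers the global solution up to a small error when each partition is large enough, which is exactly the regime the risk bounds in this line of work characterize.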