Abstract
Learning guarantees are tools for analyzing the performance of statistical learning methods. Most existing guarantees are oracle generalization bounds: they are suitable for theoretical analysis but cannot be used directly in many applications. An upper bound on the empirical excess risk is more useful in practice because it is computed from the observed data. Obtaining a learning guarantee on the empirical process is therefore helpful in practice. In this paper, under the Bernstein condition, we propose a novel approach that uses any oracle generalization bound to compute an upper bound on the empirical excess risk of the empirical risk minimization (ERM) predictor with the fast convergence rate O(1/n). When a tighter oracle generalization bound is used, our approach yields a tighter bound on the empirical excess risk. We show that the proposed empirical excess risk bound is tighter than the best bound in the literature. Using this upper bound, we obtain a deterministic, known set of predictors that contains the minimum-risk predictor with high probability. This set can be used in algorithm design.
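For context, here is a minimal sketch of the standard quantities the abstract refers to; the notation is assumed for illustration and is not taken from the paper body, which fixes its own definitions.

```latex
% Background notation (assumed for illustration; the paper body fixes its own notation).
% Z_1,\dots,Z_n are i.i.d. samples, \mathcal{F} a class of predictors, \ell a bounded loss.

% Population risk, empirical risk, and the ERM predictor:
R(f) = \mathbb{E}\,\ell(f, Z), \qquad
\widehat{R}_n(f) = \frac{1}{n}\sum_{i=1}^{n} \ell(f, Z_i), \qquad
\widehat{f}_n \in \arg\min_{f \in \mathcal{F}} \widehat{R}_n(f).

% Excess risk relative to a risk minimizer f^* \in \arg\min_{f \in \mathcal{F}} R(f):
\mathcal{E}(f) = R(f) - R(f^*).

% Bernstein condition with constant B > 0 and exponent \beta \in (0, 1]:
\mathbb{E}\big[\big(\ell(f, Z) - \ell(f^*, Z)\big)^{2}\big]
  \;\le\; B\,\big(R(f) - R(f^*)\big)^{\beta}
  \qquad \text{for all } f \in \mathcal{F}.

% "Fast rate" refers to guarantees scaling as O(1/n) (typically attainable when \beta = 1),
% in contrast to the slow rate O(1/\sqrt{n}) available without such conditions.
```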