Abstract

Learning guarantees are tools for analyzing the performance of statistical learning problems. Most existing guarantees are oracle generalization bounds: they are suitable for theoretical analysis but cannot be used directly in applications. An upper bound on the empirical excess risk is more useful in practice, since it is computed from the observed data. Therefore, obtaining a learning guarantee on the empirical process is helpful in practice. In this paper, under the Bernstein condition, we propose a novel approach that converts any oracle generalization bound into an upper bound on the empirical excess risk of the empirical risk minimization (ERM) predictor with a fast convergence rate of O(1/n). Given a tighter generalization bound, our approach yields a tighter bound on the empirical excess risk. We show that the proposed empirical excess risk bound is tighter than the best available in the literature. Using this upper bound, we obtain a deterministic, known set of predictors that, with high probability, contains the minimum-risk predictor. This set can be used in algorithm design.
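For context, the following is a brief sketch of the standard notions this abstract relies on; the notation is ours and may differ from the paper's. Given i.i.d. data $Z_1, \dots, Z_n$, a predictor class $\mathcal{F}$, and a loss $\ell$, write
\[
\hat{R}_n(f) = \frac{1}{n}\sum_{i=1}^{n}\ell(f, Z_i), \qquad R(f) = \mathbb{E}[\ell(f, Z)], \qquad \hat{f}_n = \arg\min_{f \in \mathcal{F}} \hat{R}_n(f),
\]
so that $\hat{f}_n$ is the ERM predictor. With $f^* = \arg\min_{f \in \mathcal{F}} R(f)$, the excess risk of $f$ is $R(f) - R(f^*)$, and the empirical excess risk is its counterpart computed from $\hat{R}_n$. The Bernstein condition (with exponent 1 and constant $B > 0$) requires
\[
\mathbb{E}\big[(\ell(f, Z) - \ell(f^*, Z))^2\big] \le B\, \mathbb{E}\big[\ell(f, Z) - \ell(f^*, Z)\big] \quad \text{for all } f \in \mathcal{F},
\]
which is the classical regime in which fast $O(1/n)$ rates, rather than the slow $O(1/\sqrt{n})$ rate, become attainable for ERM.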
