Abstract

In statistical learning, regression and classification concern different types of output variables, and predictive accuracy is quantified by different loss functions. This article explores new aspects of Bregman divergence (BD), a notion that unifies nearly all of the commonly used loss functions in regression and classification. The authors investigate the duality between BD and its generating function. They further establish, under the framework of BD, asymptotic consistency and normality of parametric and nonparametric regression estimators, derive the lower bound of their asymptotic covariance matrices, and demonstrate the role that parametric and nonparametric regression estimation plays in the performance of classification procedures and related machine learning techniques. These theoretical results and new numerical evidence show that the choice of loss function affects estimation procedures, whereas it has a relatively negligible asymptotic impact on classification performance. Applications of BD to statistical model building and selection with non‐Gaussian responses are also illustrated.

The Canadian Journal of Statistics 37: 119–139; 2009 © 2009 Statistical Society of Canada
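For concreteness, the following is a minimal sketch of the standard Bregman-divergence construction in its conventional convex-generator form; the symbols phi, y, and mu are illustrative and need not match the article's own notation. The divergence between an observation y and a prediction mu is the gap between phi(y) and the tangent-line approximation of phi at mu, and particular choices of phi recover the squared-error and logistic (deviance) losses mentioned in the abstract.

% Standard Bregman-divergence construction (conventional convex-generator
% form; notation is illustrative, not necessarily the article's).
% For a differentiable convex function \phi, the divergence is the gap
% between \phi(y) and its first-order Taylor expansion at \mu:
\[
  D_{\phi}(y,\mu) \;=\; \phi(y) - \phi(\mu) - (y-\mu)\,\phi'(\mu).
\]
% Squared-error loss: \phi(t) = t^2 gives
\[
  D_{\phi}(y,\mu) = y^2 - \mu^2 - (y-\mu)\,2\mu = (y-\mu)^2 .
\]
% Logistic (deviance) loss: the negative Bernoulli entropy
% \phi(t) = t\log t + (1-t)\log(1-t) gives the Kullback--Leibler form
\[
  D_{\phi}(y,\mu) = y\log\frac{y}{\mu} + (1-y)\log\frac{1-y}{1-\mu},
\]
% which, for y \in \{0,1\}, reduces to the usual log-loss
% -y\log\mu - (1-y)\log(1-\mu).

In this sketch the same formula D_phi(y, mu) yields a regression loss or a classification loss purely through the choice of generating function, which is the unifying role of BD described in the abstract.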
