Abstract

Regularisation has become an important tool in statistical modelling. In particular, the challenge of high-dimensional data has driven the fitting of ever more complex models that cannot be estimated without appropriate regularisation. The need for regularisation, however, is not restricted to the modelling of high-dimensional data; it is driven mainly by the complexity of the model. When the model includes nonparametric function estimation, regularisation restricts the class of functions that can be fitted. In regression and classification, the complexity of the model is usually determined by the structure of the predictor. Regularisation helps to identify the relevant components, which may consist of simple linear terms, functions, parametric or nonparametric interaction terms, or complex spatially and temporally structured terms. Regularisation can be made explicit through penalty terms that restrict the estimates, or it can be determined implicitly by the algorithm, as in boosting methods.

This issue of Statistics and Computing collects ten papers that address regularisation methods in different areas and with different methodology. Two papers are devoted to extensions of boosting techniques. In the paper “Twin Boosting: Improved Feature Selection and Prediction”, P. Bühlmann and T. Hothorn propose a boosting method that consists of two rounds of standard boosting, with the second round forced to resemble the first. The method shows much better feature selection behaviour than standard boosting, in particular with respect to false positives. The paper “Estimation and Regularization Techniques for Regression Models with Multidimensional Prediction Functions”
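For concreteness, the explicit form of regularisation mentioned above is a penalised fit; in generic notation (illustrative, not taken from any of the collected papers),

\hat{\beta} = \arg\min_{\beta} \; \|y - X\beta\|_2^2 + \lambda \, J(\beta),

where J is a penalty such as J(\beta) = \|\beta\|_2^2 (ridge regression) or J(\beta) = \|\beta\|_1 (the lasso), and the tuning parameter \lambda \ge 0 controls the strength of the regularisation.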
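The two-round scheme behind Twin Boosting can be sketched for componentwise L2 boosting with linear base learners. The Python code below is a minimal sketch under that assumption: the weighting of the second round by the squared first-round coefficients follows the spirit of the method, and the function names and synthetic example are hypothetical, not the authors' reference implementation.

```python
import numpy as np

def componentwise_l2_boost(X, y, n_steps=200, nu=0.1, weights=None):
    """Componentwise L2 boosting for a linear model (sketch).

    Each step refits every single predictor to the current residuals,
    picks the one with the largest (optionally weighted) reduction in
    the residual sum of squares, and takes a shrunken step of size nu.
    """
    n, p = X.shape
    weights = np.ones(p) if weights is None else weights
    beta = np.zeros(p)
    resid = y.astype(float).copy()
    col_ss = (X ** 2).sum(axis=0)  # per-column sums of squares (assumed nonzero)
    for _ in range(n_steps):
        b = X.T @ resid / col_ss           # least-squares coefficient per feature
        score = weights * b ** 2 * col_ss  # weighted RSS reduction per feature
        j = int(np.argmax(score))
        beta[j] += nu * b[j]
        resid -= nu * b[j] * X[:, j]
    return beta

def twin_boost(X, y, n_steps=200, nu=0.1):
    """Two rounds of boosting; the second is tied to the first by
    weighting each feature with its squared first-round coefficient."""
    beta1 = componentwise_l2_boost(X, y, n_steps, nu)
    return componentwise_l2_boost(X, y, n_steps, nu, weights=beta1 ** 2)

if __name__ == "__main__":
    # Sparse signal in features 0 and 1, plus 48 pure-noise features.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 50))
    y = 3 * X[:, 0] - 2 * X[:, 1] + rng.standard_normal(100)
    print(np.nonzero(twin_boost(X, y))[0])  # typically concentrates on {0, 1}
```

In this sketch, a feature that the first round never selects enters the second round with weight zero and can never be selected again, which illustrates the mechanism by which the two-round procedure suppresses false positives.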
