Abstract

Multivariate adaptive regression splines (MARS) is a modern statistical learning methodology that is important in both classification and regression. It is well suited to high-dimensional problems and shows great promise for fitting nonlinear multivariate functions: by estimating the contributions of basis functions, it allows both additive and interaction effects of the predictors to determine the response variable. The MARS algorithm for estimating the model function consists of two sub-algorithms. In this paper, we propose not to use the second (backward stepwise) sub-algorithm. Instead, we construct a penalized residual sum of squares (PRSS) for MARS as a higher-order Tikhonov regularization problem, also known as ridge regression, which shrinks coefficients and makes them more stable. Ridge regression, however, cannot perform variable selection and hence does not yield an easily interpretable model, especially when the number of predictors p is large. For this reason, we replace the Tikhonov penalty with the generalized Lasso penalty in the PRSS problem, gaining the advantage of feature selection. We treat this problem with continuous optimization techniques, which we consider to be an important complementary technology and model-based alternative to the backward stepwise algorithm. In particular, we apply the elegant framework of conic quadratic programming (CQP) and call the resulting method CG-Lasso. Here, we benefit from an area of convex optimization whose programs are very well structured, resembling linear programming and hence permitting the use of powerful interior point methods (IPMs).
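To make the contrast concrete, the two penalized problems can be written in the following generic form, where y is the response vector, B the matrix of MARS basis-function values from the forward pass, theta the coefficient vector, L a penalty matrix, and phi, lambda tuning parameters; these symbols follow the standard Tikhonov and generalized Lasso formulations and are illustrative rather than the paper's exact notation:

    \min_{\theta} \; \|y - B\theta\|_2^2 + \varphi \, \|L\theta\|_2^2    (Tikhonov / ridge)
    \min_{\theta} \; \|y - B\theta\|_2^2 + \lambda \, \|L\theta\|_1      (generalized Lasso)

The sketch below shows how such a generalized Lasso problem can be solved through a conic reformulation with an interior point solver, in the spirit of the CQP approach described in the abstract. It is a minimal illustration using the open-source modeling package CVXPY, not the authors' implementation; the basis matrix B, penalty matrix L, and data y are synthetic stand-ins:

    import numpy as np
    import cvxpy as cp

    # Synthetic stand-ins: N samples, M basis functions. In the paper's
    # setting, B would hold the MARS basis-function values from the forward
    # pass and L the chosen penalty operator (identity gives the plain Lasso).
    rng = np.random.default_rng(0)
    N, M = 100, 15
    B = rng.standard_normal((N, M))
    theta_true = np.zeros(M)
    theta_true[:4] = [2.0, -1.5, 1.0, 0.5]          # sparse ground truth
    y = B @ theta_true + 0.1 * rng.standard_normal(N)
    L = np.eye(M)
    lam = 1.0                                       # tuning parameter (would be chosen by validation)

    theta = cp.Variable(M)

    # Generalized Lasso: squared loss plus an l1 penalty on L @ theta.
    # CVXPY reformulates this as a second-order cone program and hands it
    # to a conic interior-point solver, mirroring the CQP/IPM framework.
    objective = cp.Minimize(cp.sum_squares(y - B @ theta) + lam * cp.norm1(L @ theta))
    prob = cp.Problem(objective)
    prob.solve()

    print("estimated coefficients:", np.round(theta.value, 3))

Because the l1 term is polyhedral and the loss is quadratic, the whole problem reduces to a well-structured conic quadratic program; this structure is what permits the powerful interior point methods highlighted in the abstract.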
