Abstract

A new penalized likelihood method, the reciprocal elastic net, is proposed for regularization and variable selection. The proposal is based on a new class of reciprocal penalty functions that combine the strengths of reciprocal LASSO regularization and ridge regression. We formulate the reciprocal elastic net problem as an equivalent reciprocal LASSO problem on augmented data, so the reciprocal LASSO algorithm can be used directly to generate the entire reciprocal elastic net solution path. We further present the reciprocal adaptive elastic net, which combines ridge regression with adaptively weighted reciprocal LASSO regularization. These methods, illustrated through simulated examples and real data analysis, perform well across diverse scenarios compared with published methods. Finally, we propose Bayesian methods that solve the reciprocal elastic net and reciprocal adaptive elastic net models using Gibbs samplers.
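As a sketch of the construction described above (the exact notation and scaling are our assumptions, since the abstract does not spell them out): if the reciprocal LASSO penalty is taken in its usual form, penalizing the reciprocal of each nonzero coefficient, then one plausible reciprocal elastic net criterion is

\[
\hat{\beta} \;=\; \arg\min_{\beta}\; \lVert y - X\beta \rVert_2^2
\;+\; \lambda_1 \sum_{j=1}^{p} \frac{\mathbf{1}(\beta_j \neq 0)}{\lvert \beta_j \rvert}
\;+\; \lambda_2 \sum_{j=1}^{p} \beta_j^2 .
\]

The reduction to a reciprocal LASSO problem on augmented data would then mirror the classical elastic-net-to-LASSO trick: defining

\[
X^{*} = \begin{pmatrix} X \\ \sqrt{\lambda_2}\, I_p \end{pmatrix},
\qquad
y^{*} = \begin{pmatrix} y \\ 0_p \end{pmatrix},
\]

the ridge term is absorbed into the augmented residual sum of squares, since \(\lVert y^{*} - X^{*}\beta \rVert_2^2 = \lVert y - X\beta \rVert_2^2 + \lambda_2 \lVert \beta \rVert_2^2\), leaving a reciprocal LASSO problem in \((X^{*}, y^{*})\) with penalty parameter \(\lambda_1\). This is a hedged reconstruction of the idea, not the authors' exact formulation.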
