Abstract
A large body of research has been devoted to variable selection in recent years. Bayesian methods have been successful in applications, particularly in settings where the number of measured variables can be much greater than the number of observations. This chapter reviews mixture priors that employ a point-mass distribution at zero for variable selection in regression settings. The popular stochastic search MCMC algorithm with add-delete-swap moves is described, and posterior inference and prediction via Bayesian model averaging are briefly discussed. Regression models for non-Gaussian data, including binary, multinomial, survival, and compositional count data, are also addressed. Prior constructions that take into account specific structures in the covariates are described. These constructions have been particularly successful in applications, as they allow the integration of different sources of biological information into the analysis. A discussion of computational strategies, in particular variational algorithms for scalable inference, concludes the chapter. Throughout the chapter, some emphasis is given to the author's contributions.
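As an illustrative sketch (not quoted from the chapter itself), the point-mass mixture prior referred to above is commonly written, for regression coefficients beta_j with binary inclusion indicators gamma_j, as

  % Standard spike-and-slab formulation; the notation (beta_j, gamma_j, tau^2, w)
  % is generic and chosen here for illustration only.
  \begin{align*}
    \beta_j \mid \gamma_j &\sim (1-\gamma_j)\,\delta_0(\beta_j) + \gamma_j\,\mathcal{N}(0,\tau^2), \\
    \gamma_j &\sim \mathrm{Bernoulli}(w), \qquad j = 1,\dots,p,
  \end{align*}

where delta_0 denotes a point mass at zero, tau^2 is the slab variance, and w is the prior inclusion probability. The add-delete-swap moves of the stochastic search MCMC algorithm mentioned in the abstract operate on the binary vector gamma = (gamma_1, ..., gamma_p), proposing to add a variable, delete one, or swap an included variable with an excluded one.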