Abstract

This work examines the application of regularization methods in political science, both theoretically and practically. The study focuses on using regularization to specify models for predicting political outcomes, asking specifically: what is the predictive price of protecting theoretically important variables from shrinkage? While many machine learning applications prioritize maximizing predictive accuracy, political science relies on reliable prior findings as a theoretical foundation, and the literature shows that regularization methods can shrink crucial theoretically driven variables to insignificance. To address this, we propose a protected Lasso approach in a Bayesian framework that safeguards these variables, balancing theoretical robustness and predictive power. Our analyses, applied to the American National Election Study, demonstrate the effectiveness of this approach.
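The core idea of a "protected" Lasso is that penalization is applied selectively: theoretically important covariates are exempt from the shrinkage penalty, while the remaining covariates are regularized as usual. The sketch below illustrates this idea with a simple frequentist coordinate-descent Lasso in which protected columns receive no soft-thresholding. It is a minimal illustration of selective shrinkage only, not the paper's Bayesian implementation; the function names and settings are hypothetical.

```python
import numpy as np

def soft_threshold(z, gamma):
    """Lasso soft-thresholding operator: shrinks z toward zero by gamma."""
    return np.sign(z) * max(abs(z) - gamma, 0.0)

def protected_lasso(X, y, lam, protected, n_iter=500):
    """Coordinate-descent Lasso in which columns listed in `protected`
    are exempt from the L1 penalty (illustrative sketch only)."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)  # squared column norms
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual: remove the contribution of every
            # coefficient except beta[j].
            r = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r
            if j in protected:
                # Protected variable: plain least-squares update, no shrinkage.
                beta[j] = rho / col_sq[j]
            else:
                # Unprotected variable: standard Lasso update.
                beta[j] = soft_threshold(rho, n * lam) / col_sq[j]
    return beta
```

In a small simulation, a protected variable with a true effect retains its coefficient essentially unshrunk, while pure-noise unprotected variables are driven exactly to zero, which is the trade-off the abstract describes: preserving theoretically motivated effects at some cost in overall penalized fit.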
