Abstract

We study the finite-sample behavior of Lasso-based inference methods such as post double Lasso and debiased Lasso. Empirically and theoretically, we show that these methods can exhibit substantial omitted variable biases (OVBs) because Lasso fails to select relevant controls. This phenomenon can be systematic and can occur even when the coefficients are very sparse and the sample size is large, and larger than the number of controls. Interestingly, we also show that the OVBs can remain bounded even if the Lasso prediction errors are unbounded. We compare the Lasso-based inference methods to modern high-dimensional OLS-based methods and provide practical guidance.
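To make the mechanism concrete, the following is a minimal sketch of post double Lasso, not the authors' code: Lasso selects controls in two auxiliary regressions (outcome on controls, treatment on controls), and OLS is run on the treatment plus the union of selected controls. The simulation design, penalty level, and coordinate-descent Lasso below are illustrative assumptions; if the Lasso steps fail to select a relevant confounder, the final OLS suffers exactly the omitted variable bias the abstract describes.

```python
import numpy as np

def lasso(X, y, alpha, n_sweeps=100):
    # Plain coordinate-descent Lasso: min_b (1/2n)||y - Xb||^2 + alpha * ||b||_1
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_sweeps):
        for j in range(p):
            resid = y - X @ b + X[:, j] * b[j]   # residual excluding feature j
            rho = X[:, j] @ resid / n
            b[j] = np.sign(rho) * max(abs(rho) - alpha, 0.0) / col_sq[j]
    return b

# Hypothetical simulation: one relevant control (index 0) confounds the treatment.
rng = np.random.default_rng(0)
n, p = 200, 100
X = rng.normal(size=(n, p))                        # candidate controls
d = 0.5 * X[:, 0] + rng.normal(size=n)             # treatment, confounded by control 0
y = 1.0 * d + 0.3 * X[:, 0] + rng.normal(size=n)   # true treatment effect = 1.0

# Post double Lasso: union of controls selected by (i) Lasso of y on X and
# (ii) Lasso of d on X, then OLS of y on the treatment plus that union.
sel = sorted(set(np.flatnonzero(lasso(X, y, 0.15))) |
             set(np.flatnonzero(lasso(X, d, 0.15))))
Z = np.column_stack([d, X[:, sel]]) if sel else d[:, None]
beta = np.linalg.lstsq(Z, y, rcond=None)[0]

naive = (d @ y) / (d @ d)                          # OLS of y on d alone: suffers OVB
print("selected controls:", sel)
print("naive estimate:", round(naive, 3),
      "| post double Lasso estimate:", round(beta[0], 3))
```

In this design the confounder's coefficients are large enough for both Lasso steps to pick it up; the paper's point is that with smaller (but still nonzero) coefficients the selection step can systematically miss such controls, leaving the bias in place.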
