Abstract

Combining machine learning with econometric analysis is becoming increasingly prevalent in both research and practice. A common empirical strategy uses predictive modeling techniques to “mine” variables of interest from available data and then incorporates those variables into an econometric framework to estimate causal effects. However, because the predictions from machine learning models are inevitably imperfect, econometric analyses based on the predicted variables are likely to suffer from bias due to measurement error. We propose a novel approach to mitigating these biases that leverages the random forest technique: we use random forests not only for prediction but also for generating instrumental variables for bias correction. A random forest performs best when it is composed of trees that are individually accurate in their predictions yet make “different” mistakes, that is, have weakly correlated prediction errors. A key observation is that these properties are closely related to the relevance and exclusion requirements of valid instrumental variables. We design a data-driven procedure that selects tuples of individual trees from a random forest, in which one tree serves as the endogenous covariate and the others serve as its instruments. Simulation experiments demonstrate the procedure's efficacy in mitigating estimation biases and its superior performance over alternative methods.

History: David Martens served as the senior editor for this article.

Data Ethics & Reproducibility Note: The code capsule is available on Code Ocean at https://codeocean.com/capsule/7039927/tree/v1 and in the e-Companion to this article (available at https://doi.org/10.1287/ijds.2022.0019).
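To make the core idea concrete, the following Python sketch shows how one tree's prediction can serve as the (error-ridden) endogenous covariate while other trees' predictions serve as its instruments in a two-stage least squares (2SLS) regression. This is a minimal illustration, not the authors' exact procedure: the simulated data-generating process, the choice of scikit-learn and NumPy, the fixed tree indices, and the hand-rolled 2SLS estimator are all assumptions made for the example; in the paper the tree tuples are selected by a data-driven procedure.

```python
# Illustrative sketch (not the authors' implementation): use per-tree
# predictions from a random forest as an endogenous regressor and as
# instruments in a 2SLS regression.  The data-generating process below
# is simulated and purely hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 2000

# Simulated setting: the true covariate x_star is not observed directly;
# we only observe features that predict it, plus an outcome y whose
# true causal coefficient on x_star is beta_true.
features = rng.normal(size=(n, 5))
x_star = features @ np.array([1.0, -0.5, 0.8, 0.3, -0.2]) + rng.normal(scale=0.5, size=n)
beta_true = 2.0
y = beta_true * x_star + rng.normal(scale=1.0, size=n)

# Step 1: fit a random forest to predict x_star on one half of the sample,
# then collect each individual tree's prediction on the other half.
rf = RandomForestRegressor(n_estimators=100, max_features="sqrt", random_state=0)
rf.fit(features[: n // 2], x_star[: n // 2])
F_est, y_est = features[n // 2:], y[n // 2:]
tree_preds = np.column_stack([t.predict(F_est) for t in rf.estimators_])

# Step 2: treat one tree's prediction as the endogenous covariate and two
# other trees' predictions as its instruments.  (The paper selects these
# tuples in a data-driven way; the indices here are fixed for illustration.)
d = tree_preds[:, 0]          # endogenous regressor, measured with error
Z = tree_preds[:, [1, 2]]     # instruments: other trees' predictions

# Step 3: 2SLS by hand.  X = [1, d]; instrument matrix Zf = [1, Z].
X = np.column_stack([np.ones_like(d), d])
Zf = np.column_stack([np.ones_like(d), Z])
PZX = Zf @ np.linalg.solve(Zf.T @ Zf, Zf.T @ X)       # projection of X onto Zf
beta_2sls = np.linalg.solve(PZX.T @ X, PZX.T @ y_est)  # (X'PzX)^{-1} X'Pz y
beta_ols = np.linalg.solve(X.T @ X, X.T @ y_est)

print("OLS estimate of beta:  %.3f" % beta_ols[1])
print("2SLS estimate of beta: %.3f" % beta_2sls[1])
```

Because the trees' prediction errors act as classical measurement error, the OLS coefficient on the single-tree prediction is attenuated toward zero; to the extent that the instrumenting trees make weakly correlated mistakes, the 2SLS estimate should sit closer to the true coefficient, which is the intuition the paper formalizes.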
