Abstract

In this article we consider nonparametric estimation of a structural equation model under full additivity constraint. We propose estimators for both the conditional mean and gradient which are consistent, asymptotically normal, oracle efficient, and free from the curse of dimensionality. Monte Carlo simulations support the asymptotic developments. We employ a partially linear extension of our model to study the relationship between child care and cognitive outcomes. Some of our (average) results are consistent with the literature (e.g., negative returns to child care when mothers have higher levels of education). However, as our estimators allow for heterogeneity both across and within groups, we are able to contradict many findings in the literature (e.g., we do not find any significant differences in returns between boys and girls or for formal versus informal child care). Supplementary materials for this article are available online.

Highlights

  • We develop oracle efficient estimators for additive nonparametric structural equation models

  • We show that our estimators of the conditional mean and gradient are consistent, asymptotically normal, and free from the curse of dimensionality

  • We consider a partially linear extension of our model which we use in an empirical application relating child care to test scores

Introduction

Nonparametric and semiparametric estimation of structural equation models is becoming increasingly popular in the literature (e.g., Ai and Chen, 2003; Chen and Pouzo, 2012; Darolles et al., 2011; Gao and Phillips, 2013; Hall and Horowitz, 2005; Martins-Filho and Yao, 2012; Newey and Powell, 2003; Newey et al., 1999; Pinkse, 2000; Roehrig, 1988; Su and Ullah, 2008; Su et al., 2013; Vella, 1991). We impose an additivity constraint on each stage and propose a three-step estimation procedure for our additively separable nonparametric structural equation model. The first stage involves separate (additive) regressions of each endogenous regressor on each of the exogenous regressors in order to obtain consistent estimates of the residuals. Our final step (one-stage backfitting) involves univariate local-linear kernel regressions to estimate the conditional mean and gradient of each of our additive components. This allows our final-stage estimators to be free from the curse of dimensionality: each additive component can be estimated with the same asymptotic accuracy as if all the other components in the regression model were known up to a location parameter (e.g., see Henderson and Parmeter, 2014, Horowitz, 2014, or Li and Racine, 2007).
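
To make the mechanics concrete, the following is a minimal, illustrative Python sketch of the two building blocks this procedure relies on: a univariate local-linear kernel smoother (which delivers both the fitted conditional mean and the gradient) and a backfitting loop for an additive regression. This is not the authors' implementation; the Gaussian kernel, the fixed bandwidth h, and all function names are assumptions made purely for illustration, and the sketch omits the first-stage residual (control function) step of the structural model.

import numpy as np

def local_linear(x, y, x0, h):
    # Local-linear fit at the point x0 using Gaussian kernel weights with
    # bandwidth h; returns the estimated conditional mean and gradient at x0.
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    X = np.column_stack([np.ones_like(x), x - x0])
    WX = X * w[:, None]
    beta = np.linalg.solve(X.T @ WX, WX.T @ y)  # weighted least squares
    return beta[0], beta[1]

def backfit_additive(X, y, h, n_iter=20):
    # Estimate the components of y = c + m_1(x_1) + ... + m_d(x_d) + error
    # by iteratively smoothing partial residuals, one covariate at a time.
    n, d = X.shape
    m = np.zeros((n, d))
    for _ in range(n_iter):
        for j in range(d):
            others = [k for k in range(d) if k != j]
            resid = y - y.mean() - m[:, others].sum(axis=1)
            fit = np.array([local_linear(X[:, j], resid, x0, h)[0]
                            for x0 in X[:, j]])
            m[:, j] = fit - fit.mean()  # center components for identification
    return m

# Toy usage on simulated data: y = sin(x1) + x2^2 + noise.
rng = np.random.default_rng(0)
Xs = rng.uniform(-2, 2, size=(300, 2))
ys = np.sin(Xs[:, 0]) + Xs[:, 1] ** 2 + 0.2 * rng.standard_normal(300)
m_hat = backfit_additive(Xs, ys, h=0.3)

Note that the paper's final step is a one-stage backfitting given consistent pilot estimates, rather than the fully iterated loop shown above; the iterated version is used here only because it is the simplest self-contained illustration of additive smoothing.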
