Abstract

I consider the estimation of the average treatment effect (ATE) in a population that can be divided into $G$ groups, such that one has unbiased and uncorrelated estimators of the conditional average treatment effect (CATE) in each group. These conditions are met, for instance, in stratified randomized experiments. I assume that the outcome is homoscedastic, and that each CATE is bounded in absolute value by $B$ standard deviations of the outcome, for some known constant $B$. I derive, across all linear combinations of the CATE estimators, the estimator of the ATE with the lowest worst-case mean-squared error. This optimal estimator assigns a weight equal to group $g$'s share in the population to the most precisely estimated CATEs, and a weight proportional to the inverse of the estimator's variance to the least precisely estimated CATEs. This optimal estimator is feasible: the weights only depend on known quantities. I then allow for positive covariances, known up to the outcome's variance, between the estimators. This condition is met in differences-in-differences designs, if errors are homoscedastic and uncorrelated. Under those assumptions, I show that the minimax estimator is still feasible and can easily be computed.
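The minimax problem described above can be illustrated numerically. The sketch below (not the paper's closed-form solution; the group shares, variances, and bound are hypothetical inputs) takes unbiased, uncorrelated CATE estimators with known variances and minimizes, over linear combinations $w$, the worst-case MSE: the worst-case squared bias $(B\sigma\sum_g |w_g - p_g|)^2$, attained by adversarial CATEs of magnitude $B\sigma$, plus the estimator's variance $\sum_g w_g^2 v_g$.

```python
# Illustrative numerical sketch of the minimax problem, assuming unbiased,
# uncorrelated CATE estimators and CATEs bounded by B outcome standard
# deviations. All input values below are hypothetical.
import numpy as np
from scipy.optimize import minimize


def worst_case_mse(w, p, v, B, sigma):
    # Worst-case squared bias: each CATE set adversarially to B*sigma with
    # the sign of w_g - p_g, plus the variance of the linear combination.
    bias = B * sigma * np.sum(np.abs(w - p))
    return bias ** 2 + np.sum(w ** 2 * v)


def minimax_weights(p, v, B, sigma=1.0):
    # Start from the unbiased weights w = p; Nelder-Mead handles the
    # non-smooth absolute-value term in the objective.
    res = minimize(worst_case_mse, x0=p, args=(p, v, B, sigma),
                   method="Nelder-Mead")
    return res.x


p = np.array([0.5, 0.3, 0.2])     # group population shares (hypothetical)
v = np.array([0.01, 0.04, 0.25])  # CATE estimator variances (hypothetical)
w = minimax_weights(p, v, B=1.0)
```

The resulting weights shrink the imprecisely estimated groups away from their population shares, trading a bounded bias for a lower variance, consistent with the weighting scheme described in the abstract.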

