Abstract

We investigated the conditions under which the information matrix, conditional on covariates, and the unconditional version, integrated over the marginal distribution of the covariates, are increasing functions of the measurement error variance, and when the conditional and unconditional asymptotic variances of the maximum likelihood estimate are decreasing functions of the measurement error variance. We say that a paradox occurs when one can decrease the variance of the maximum likelihood estimate by increasing the measurement error. Two covariate measurement error models were considered: the Berkson and the classical measurement error models, with continuous and binary dependent variables. The measurement error variance was assumed known. We found that in the linear model with Berkson covariate measurement error, the paradox can occur when the model variance is known and quite large. With binary data and Berkson covariate measurement error, the paradox is likely to occur for rare events. The paradox based upon the unconditional variance, calculated as the inverse of the unconditional information matrix, does not occur with the classical measurement error model. However, it does occur in some circumstances for the conditional variance, calculated as the inverse of the observed information matrix. Empirical evidence for this is illustrated using data from the Nurses' Health Study.
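The Berkson linear-model paradox summarized above can be illustrated with a small numerical sketch. This is my own reconstruction under a simple setup, not the paper's derivation, and all parameter values are hypothetical: with Y = b0 + b1*X + eps, Berkson error X = W + U with U ~ N(0, s_u2), eps ~ N(0, s2), and s2 known, we have Y | W ~ N(b0 + b1*W, s2 + b1^2 * s_u2), so the outcome variance itself carries information about b1.

```python
def fisher_info_b1(s_u2, b1=1.0, s2=10.0, ew2=1.0):
    """Per-observation Fisher information for b1 in a simple linear
    Berkson model with KNOWN model variance s2 (ew2 = E[W^2]).
    Illustrative reconstruction only; parameter values are hypothetical."""
    total_var = s2 + b1**2 * s_u2
    mean_part = ew2 / total_var                    # information from the mean
    var_part = 2 * b1**2 * s_u2**2 / total_var**2  # information from the variance
    return mean_part + var_part

# With a large known model variance (s2 = 10 here), raising the Berkson
# error variance from 0 to 5 *increases* the information, i.e. decreases
# the asymptotic variance of the MLE -- the paradox.
print(fisher_info_b1(0.0))  # 0.1
print(fisher_info_b1(5.0))  # ~0.289
```

In this sketch the mean contribution shrinks as the error variance grows, but when s2 is large relative to E[W^2]*b1^2 the variance contribution grows faster, which matches the abstract's condition that the known model variance be "quite large".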
