Abstract

Standardized errors $(\hat{T} - T)/v^{1/2}$ were calculated for both ratio and regression estimators in each of 10,000 simple random samples of $n = 32$ from each of six populations, using four different variance estimators. Graphs show how the percentage of intervals $\hat{T} \pm 1.96\,v^{1/2}$ that fail to contain $T$ changes as a function of the average value of the auxiliary variable in the sample. They reveal that (a) intervals using the variance estimators from standard linear regression theory were hopelessly unreliable, (b) intervals using the conventional finite-population variance estimators showed a striking excess of failures in badly balanced samples, and (c) none of the four variance estimators produced satisfactory confidence intervals in populations arising from badly skewed prediction models.
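The simulation design described above is straightforward to reproduce in outline. The sketch below, a minimal illustration rather than the paper's actual study, draws repeated simple random samples of $n = 32$ from a synthetic population (the paper's six real populations are not reproduced here), computes the ratio estimator of the total with one conventional linearization variance estimator (only one of the four compared in the paper), and tabulates the failure rate of the intervals $\hat{T} \pm 1.96\,v^{1/2}$ as a function of the sample mean of the auxiliary variable:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical finite population: auxiliary x and response y roughly
# following a ratio model y ~ R*x with heteroscedastic noise. This is an
# assumed stand-in for the paper's six real populations.
N, n, REPS = 1000, 32, 10_000
x = rng.gamma(shape=2.0, scale=10.0, size=N)
y = 3.0 * x + rng.normal(scale=np.sqrt(x))
T = y.sum()                                  # true population total
X_total = x.sum()

xbar_s = np.empty(REPS)                      # sample mean of x per replicate
missed = np.empty(REPS, dtype=bool)          # did the interval miss T?

for r in range(REPS):
    s = rng.choice(N, size=n, replace=False)     # simple random sample
    xs, ys = x[s], y[s]
    R_hat = ys.sum() / xs.sum()
    T_hat = R_hat * X_total                      # ratio estimator of T
    e = ys - R_hat * xs                          # residuals from the ratio fit
    # Conventional linearization variance estimator (one choice among the
    # several the paper compares; the others are not shown here).
    v = N**2 * (1 - n / N) * e.var(ddof=1) / n
    half = 1.96 * np.sqrt(v)
    xbar_s[r] = xs.mean()
    missed[r] = not (T_hat - half <= T <= T_hat + half)

# Bin failure rates by the sample mean of x, mimicking the paper's graphs
# of interval failure versus sample balance.
bins = np.quantile(xbar_s, np.linspace(0, 1, 11))
idx = np.clip(np.digitize(xbar_s, bins) - 1, 0, 9)
for b in range(10):
    rate = missed[idx == b].mean()
    print(f"xbar in [{bins[b]:7.2f}, {bins[b+1]:7.2f}): "
          f"{100 * rate:5.2f}% of intervals missed T")
```

Binning by the sample mean of $x$ is what makes the balance effect visible: under the nominal theory every bin should miss about 5% of the time, so a failure rate that climbs in the extreme bins reproduces the excess of failures in badly balanced samples reported in result (b).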
