Abstract

Because it reflects both the strength of the association between exposure to a risk factor and the disease of interest and the prevalence of the risk factor, the attributable risk (AR) is probably the epidemiologic measure most commonly used by public health administrators to identify important risk factors. This paper discusses interval estimation of the AR in the presence of confounders under cross-sectional sampling. It considers four asymptotic interval estimators that directly generalize those originally proposed for the case of no confounders, and uses Monte Carlo simulation to evaluate their finite-sample performance in a variety of situations. The interval estimators based on Wald's test statistic and on a quadratic equation suggested here are found to perform reasonably well with respect to coverage probability in all the situations considered. The interval estimator based on the logarithmic transformation, which was previously found to perform consistently well in the case of no confounders, may have a coverage probability below the desired confidence level when the underlying common prevalence rate ratio (RR) across strata between the exposed and the unexposed is large (≥4). The interval estimator based on the logit transformation is inappropriate for use when the underlying common RR is close to 1; on the other hand, when the underlying common RR is large (≥4), it is probably preferable to the other three estimators. When the sample size is large (≥400) and RR ≥ 2, all four interval estimators developed here are found to be essentially equivalent with respect to both coverage probability and average length in the situations considered.
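The abstract does not reproduce the four estimators themselves, so the following Python sketch only illustrates the kind of Monte Carlo coverage evaluation described: it simulates cross-sectional 2×2 tables (ignoring confounders for simplicity) under an assumed exposure prevalence, baseline disease prevalence, and prevalence rate ratio, and checks how often an interval for the crude AR covers the true value. The percentile-bootstrap interval, the parameter values, and all function names are hypothetical stand-ins, not the paper's estimators.

```python
import numpy as np

rng = np.random.default_rng(0)

def attributable_risk(table):
    """Crude AR from a 2x2 cross-sectional table
    ((exposed cases, exposed non-cases),
     (unexposed cases, unexposed non-cases))."""
    (a, b), (c, d) = table
    n = a + b + c + d
    p_disease = (a + c) / n            # overall disease prevalence
    p_disease_unexposed = c / (c + d)  # prevalence among the unexposed
    return (p_disease - p_disease_unexposed) / p_disease

def bootstrap_ci(table, level=0.95, n_boot=1000):
    """Percentile-bootstrap interval for the crude AR (an illustrative
    stand-in for the paper's four asymptotic interval estimators)."""
    (a, b), (c, d) = table
    n = a + b + c + d
    probs = np.array([a, b, c, d]) / n
    stats = []
    for _ in range(n_boot):
        a2, b2, c2, d2 = rng.multinomial(n, probs)
        if c2 == 0 or (a2 + c2) == 0:
            continue  # skip degenerate resamples
        stats.append(attributable_risk(((a2, b2), (c2, d2))))
    alpha = 1 - level
    return tuple(np.quantile(stats, [alpha / 2, 1 - alpha / 2]))

def coverage(p_exposure, p_dis_unexp, rr, n, level=0.95, n_sim=300):
    """Monte Carlo estimate of the coverage probability under assumed
    (hypothetical) parameter values and sample size n."""
    cell_probs = np.array([
        p_exposure * p_dis_unexp * rr,         # exposed case
        p_exposure * (1 - p_dis_unexp * rr),   # exposed non-case
        (1 - p_exposure) * p_dis_unexp,        # unexposed case
        (1 - p_exposure) * (1 - p_dis_unexp),  # unexposed non-case
    ])
    p_disease = cell_probs[0] + cell_probs[2]
    true_ar = (p_disease - p_dis_unexp) / p_disease
    hits = 0
    for _ in range(n_sim):
        a, b, c, d = rng.multinomial(n, cell_probs)
        lo, hi = bootstrap_ci(((a, b), (c, d)), level)
        hits += (lo <= true_ar <= hi)
    return hits / n_sim

# Example: large sample (n = 400) and large prevalence rate ratio (RR = 4)
print(coverage(p_exposure=0.3, p_dis_unexp=0.05, rr=4, n=400))
```

An empirical coverage probability close to the nominal 0.95 under a given configuration is the criterion the paper uses to compare its estimators; the stratified, confounder-adjusted versions would replace the crude AR and the bootstrap interval above.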
