Abstract

Double-sampling schemes, in which one classifier assesses the whole sample and a second classifier assesses a subset of it, have been introduced to reduce classification errors when an infallible (gold standard) classifier is unavailable or impractical. Inference procedures have previously been proposed for situations where an infallible classifier is available to validate a subset of a sample that has already been classified by a fallible classifier. Here, we consider the case where both classifiers are fallible, proposing and evaluating several confidence interval procedures for a proportion under two models, distinguished by the assumption made about the ascertainment of the two classifiers. Simulation results suggest that, under the model with the conditional independence assumption, the modified Wald-based confidence interval, the score-based confidence interval, two Bayesian credible intervals, and the percentile bootstrap confidence interval performed reasonably well even for small binomial proportions and a small validated sample. Under the model without the conditional independence assumption, the confidence interval derived from the Wald test with nuisance parameters appropriately evaluated, the likelihood-ratio-based confidence interval, the score-based confidence interval, and the percentile bootstrap confidence interval performed satisfactorily in terms of coverage. Moreover, confidence intervals based on log and logit transformations also performed well under both models when neither the binomial proportion nor the proportion of the sample validated is very small. Two examples illustrate the procedures.
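As a generic illustration of one interval type named in the abstract, the sketch below computes a percentile bootstrap confidence interval for a plain binomial proportion. This is a simplified analogue only: the paper's procedures operate on double-sampling data from two fallible classifiers, which would require resampling the full cross-classified counts rather than a single (successes, n) pair, and the function name and parameters here are illustrative assumptions, not the paper's implementation.

```python
import random

def percentile_bootstrap_ci(successes, n, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for a binomial proportion (illustrative only).

    Resamples Bernoulli outcomes at the observed proportion and takes the
    alpha/2 and 1 - alpha/2 empirical percentiles of the bootstrap proportions.
    """
    rng = random.Random(seed)
    p_hat = successes / n
    boots = []
    for _ in range(n_boot):
        # Draw n Bernoulli(p_hat) outcomes and record the resampled proportion.
        boots.append(sum(rng.random() < p_hat for _ in range(n)) / n)
    boots.sort()
    lower = boots[int((alpha / 2) * n_boot)]
    upper = boots[int((1 - alpha / 2) * n_boot) - 1]
    return lower, upper

# Example: 30 positives out of 200 gives an interval around p_hat = 0.15.
lower, upper = percentile_bootstrap_ci(30, 200)
```

In the paper's two-fallible-classifier setting, the resampling step would instead redraw the joint classification table for the validated subsample and the marginal counts for the rest of the sample before recomputing the estimated proportion.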
