Abstract

In the semiconductor industry, test tapes are used to classify microprocessors into one of several functional categories before packaging. Test tapes must be changed frequently, so it is necessary to decide whether a new tape performs similarly enough to the old tape. The kappa statistic, a chance-corrected measure of agreement for categorical data, would be useful for this purpose if a reliable confidence interval procedure were available. Unfortunately, confidence intervals for κ have not been investigated for applications in which the true value is expected to be close to 1, more than two categories are involved, and the marginal distributions are not uniform. Simulation was used to study the properties of five confidence interval procedures for κ under these conditions. The procedures were compared on average confidence interval length and coverage rate. Although no confidence interval method consistently outperformed the rest, the bias-corrected bootstrap method generally performed well. A square root transformation method also performed well and is less computationally intensive than the bootstrap method.
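To make the setting concrete, the sketch below computes Cohen's κ for two classification runs (old tape vs. new tape) and a simple percentile bootstrap confidence interval. This is an illustration only, not the paper's method: the paper studies five interval procedures including a bias-corrected bootstrap, and the data here are hypothetical; the percentile bootstrap is used as the simplest variant to demonstrate the idea.

```python
import numpy as np

def cohens_kappa(a, b, k):
    """Chance-corrected agreement between two raters over k categories."""
    a, b = np.asarray(a), np.asarray(b)
    p_o = np.mean(a == b)                                  # observed agreement
    p_e = sum((a == c).mean() * (b == c).mean() for c in range(k))  # chance agreement
    return (p_o - p_e) / (1.0 - p_e)

def bootstrap_ci(a, b, k, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for kappa (a simpler cousin of the
    bias-corrected bootstrap interval studied in the paper)."""
    a, b = np.asarray(a), np.asarray(b)
    rng = np.random.default_rng(seed)
    n = len(a)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)        # resample (old tape, new tape) pairs
        stats.append(cohens_kappa(a[idx], b[idx], k))
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])

# Hypothetical example: 100 parts, three functional categories,
# the two tapes disagree on only a few parts, so kappa is near 1.
old = np.array([0] * 40 + [1] * 35 + [2] * 25)
new = old.copy()
new[:3] = (old[:3] + 1) % 3                # introduce three disagreements
print(cohens_kappa(old, new, 3))           # ≈ 0.95
print(bootstrap_ci(old, new, 3))
```

Because the true κ is expected to be close to 1 in this application, the bootstrap distribution is skewed toward the upper bound, which is exactly why interval procedures need to be compared on both coverage rate and average length rather than assumed symmetric.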
