Abstract

We present a new technique for assessing the effectiveness of a classification algorithm using discordant pair analysis. The method uses a baseline algorithm with known performance and a large unlabeled dataset with an assumed class distribution, and obtains overall performance estimates by adjudicating only the subset of examples that the two algorithms classify discordantly. This minimizes the number of human adjudications required while preserving the precision of the evaluation, and in some cases it improves evaluation quality by reducing human adjudication errors. The approach is an efficient alternative to the traditional exhaustive method of performance evaluation, in which every example is adjudicated, and has the potential to improve the accuracy of performance estimates. Simulation studies show that the discordant pair method reduces the number of adjudications by over 90% while maintaining the same level of sensitivity and specificity.
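
As a rough illustration of the idea, the sketch below simulates evaluating a new classifier A against a baseline B whose sensitivity and specificity are known, on an unlabeled dataset with an assumed prevalence. Only the discordant examples are "adjudicated" (i.e., their true labels are revealed), and A's sensitivity and specificity are recovered from those counts. This is a minimal simulation under assumed parameter values, not the authors' implementation; all names and numbers are hypothetical.

    # Minimal simulation sketch of discordant-pair evaluation.
    # All parameter values and names are hypothetical.
    import numpy as np

    rng = np.random.default_rng(0)

    N = 100_000          # size of the unlabeled dataset
    prevalence = 0.10    # assumed class distribution (proportion of positives)

    # Simulate ground truth (hidden from the evaluator).
    y = rng.random(N) < prevalence

    def classify(y, sensitivity, specificity, rng):
        """Simulate a classifier with fixed sensitivity and specificity."""
        return np.where(y,
                        rng.random(y.size) < sensitivity,     # correct on positives
                        ~(rng.random(y.size) < specificity))  # errors on negatives

    # Baseline B with known performance; new algorithm A to be evaluated.
    se_B, sp_B = 0.85, 0.95
    pred_B = classify(y, se_B, sp_B, rng)
    pred_A = classify(y, 0.90, 0.97, rng)

    # Only discordant pairs are sent for human adjudication.
    discordant = pred_A != pred_B
    adjudicated_truth = y[discordant]   # human labels on the discordant subset
    a_pos = pred_A[discordant]          # A positive, B negative on these examples

    # Counts of adjudicated discordant examples by true class and direction.
    n_Apos_true_pos = np.sum(adjudicated_truth & a_pos)
    n_Bpos_true_pos = np.sum(adjudicated_truth & ~a_pos)
    n_Apos_true_neg = np.sum(~adjudicated_truth & a_pos)
    n_Bpos_true_neg = np.sum(~adjudicated_truth & ~a_pos)

    # Estimate A's performance from B's known performance, the assumed
    # prevalence, and the adjudicated discordant counts.
    n_pos = N * prevalence
    n_neg = N * (1 - prevalence)
    se_A_est = se_B + (n_Apos_true_pos - n_Bpos_true_pos) / n_pos
    sp_A_est = sp_B + (n_Bpos_true_neg - n_Apos_true_neg) / n_neg

    print(f"adjudications needed: {discordant.sum()} of {N} "
          f"({100 * discordant.mean():.1f}%)")
    print(f"estimated sensitivity of A: {se_A_est:.3f}")
    print(f"estimated specificity of A: {sp_A_est:.3f}")

In this toy setting only a few percent of the examples are discordant, so the human effort is a small fraction of full adjudication, while the estimates of A's sensitivity and specificity stay close to the values used to simulate it.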
