Abstract
Computer-aided diagnostic (CAD) schemes have been developed for assisting radiologists in the detection of various lesions in medical images. The reliable evaluation of CAD schemes is as important as the development of such schemes in the field of CAD research. In the past, many evaluation approaches, such as the resubstitution, leave-one-out, cross-validation, and hold-out methods, have been proposed for evaluating the performance of various CAD schemes. However, some important issues in the evaluation of CAD schemes have not been analyzed systematically, either theoretically or experimentally. Such important issues include (1) the analysis and comparison of various evaluation methods in terms of key characteristics, in particular, the bias and the generalization performance of trained CAD schemes; (2) the analysis of pitfalls in the incorrect use of various evaluation methods and effective approaches to reducing the bias and variance caused by these pitfalls; (3) the improvement of generalizability for CAD schemes trained with limited datasets. This article consists of a series of three closely related studies that address the above three issues. We believe that this article will help researchers in the field of CAD reduce the bias and improve the generalizability of their CAD schemes.
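The optimistic bias of resubstitution that motivates the methods above can be shown with a minimal sketch (not from the article): a toy 1-nearest-neighbor classifier on hypothetical 1-D data, evaluated once by resubstitution (testing on the training set) and once by leave-one-out.

```python
# Illustrative sketch, assuming a toy 1-D dataset of (feature, label) pairs.
# It contrasts resubstitution with leave-one-out for a 1-nearest-neighbor
# classifier to show how resubstitution overestimates performance.

def nn_predict(train, x):
    """Predict the label of x as the label of its nearest training point."""
    nearest = min(train, key=lambda p: abs(p[0] - x))
    return nearest[1]

# Hypothetical dataset with overlapping classes (labels 0 and 1).
data = [(0.1, 0), (0.4, 0), (0.5, 1), (0.9, 1),
        (0.45, 0), (0.55, 1), (0.48, 1), (0.52, 0)]

# Resubstitution: train and test on the same cases. For 1-NN, every
# point's nearest neighbor is itself, so accuracy is a misleading 1.0.
resub_acc = sum(nn_predict(data, x) == y for x, y in data) / len(data)

# Leave-one-out: hold out each case in turn and classify it with a
# model "trained" on the remaining cases.
loo_acc = sum(
    nn_predict(data[:i] + data[i + 1:], x) == y
    for i, (x, y) in enumerate(data)
) / len(data)

print(resub_acc)  # 1.0 -- optimistically biased
print(loo_acc)    # lower -- a less biased estimate of generalization
```

The same gap appears for any sufficiently flexible classifier; cross-validation and hold-out sit between these two extremes, trading bias against variance of the estimate.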