Abstract
Computer-aided diagnostic (CAD) schemes have been developed to assist radiologists in detecting various lesions in medical images. Many evaluation approaches, such as the resubstitution, leave-one-out, cross-validation, and hold-out methods, have been employed to assess the performance of CAD schemes. Some investigators have studied the bias that these evaluation methods introduce into the estimated performance levels of CAD schemes trained with finite samples. However, no systematic study has compared these common evaluation methods across multiple important characteristics, such as the bias of the estimated performance, the generalization performance, and the uniqueness of the trained CAD scheme. Therefore, in this study, we examined and compared these characteristics for the various evaluation methods and sought to provide a guideline to help investigators select appropriate evaluation methods for the assessment of CAD schemes in typical practical situations.
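To make the four evaluation methods named above concrete, the following is a minimal sketch (not taken from the paper) that contrasts resubstitution, leave-one-out, k-fold cross-validation, and hold-out performance estimates using scikit-learn on synthetic data; the dataset, classifier, fold count, and split ratio are illustrative assumptions only.

```python
# Illustrative comparison of common evaluation methods (hypothetical example,
# not the authors' experimental setup).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import (
    LeaveOneOut, KFold, train_test_split, cross_val_score
)

# Small synthetic "finite sample" dataset (assumed sizes for illustration).
X, y = make_classification(n_samples=100, n_features=10, random_state=0)
clf = LogisticRegression(max_iter=1000)

# Resubstitution: train and test on the same samples (optimistically biased).
resub = clf.fit(X, y).score(X, y)

# Leave-one-out: each sample is held out exactly once.
loo = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()

# k-fold cross-validation: average accuracy over k held-out folds.
cv5 = cross_val_score(clf, X, y, cv=KFold(n_splits=5, shuffle=True, random_state=0)).mean()

# Hold-out: a single train/test split (pessimistic with small training sets).
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
hold = clf.fit(X_tr, y_tr).score(X_te, y_te)

print(f"resubstitution={resub:.3f}, leave-one-out={loo:.3f}, "
      f"5-fold CV={cv5:.3f}, hold-out={hold:.3f}")
```

With a finite sample, resubstitution typically overestimates performance, while hold-out tends to underestimate it because less data is available for training; these are the kinds of biases the study compares.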