Radiology offers a presumptive diagnosis. Radiological errors are prevalent, recurrent, and multi-factorial in etiology. Erroneous diagnostic conclusions can arise from factors such as poor technique, failures of visual perception, lack of knowledge, and misjudgment. Such perceptual and interpretive errors can alter the Ground Truth (GT) of Magnetic Resonance (MR) imaging, which in turn results in faulty class labeling. Wrong class labels can mislead training and produce erroneous classification outcomes in Computer Aided Diagnosis (CAD) systems. This work aims to verify the accuracy of the GT of biomedical datasets that are extensively used in binary classification frameworks. Generally, such datasets are labeled by only one radiologist. Our article adopts a hypothetical approach to generate several faulty iterations, where each iteration simulates a faulty radiologist's perspective in MR image labeling. To model radiologists who are subject to human error when assigning class labels, we randomly swap a subset of the class labels, forcing them to be faulty. The experiments are carried out on iterations (with varying numbers of brain images) randomly created from two benchmark brain MR datasets, DS-75 and DS-160, collected from the Harvard Medical School website, and from a larger self-collected dataset, NITR-DHH. To validate our work, the average classification parameter values of the faulty iterations are compared with those of the original datasets. The presented approach offers a potential solution for verifying the genuineness and reliability of the GT of MR datasets, and it can be utilized as a standard technique to validate the correctness of any biomedical dataset.
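A minimal sketch of the label-swapping simulation described above, assuming binary 0/1 labels. The function name make_faulty_iteration, the 10% flip fraction, the number of iterations, and the logistic-regression classifier are illustrative assumptions, not the paper's specification; synthetic features stand in for MR image features.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def make_faulty_iteration(y, flip_fraction, rng):
    """Return a copy of binary labels y with a random subset swapped (0 <-> 1),
    simulating a radiologist who mislabels part of the dataset."""
    y_faulty = np.asarray(y).copy()
    n_flip = int(round(flip_fraction * len(y_faulty)))
    idx = rng.choice(len(y_faulty), size=n_flip, replace=False)
    y_faulty[idx] = 1 - y_faulty[idx]
    return y_faulty

rng = np.random.default_rng(0)
# Stand-in features/labels; in the paper these would come from brain MR images.
X, y = make_classification(n_samples=160, n_features=20, random_state=0)

clf = LogisticRegression(max_iter=1000)
acc_original = cross_val_score(clf, X, y, cv=5).mean()

# Average classification accuracy over several faulty iterations,
# each with 10% of the class labels randomly swapped.
faulty_accs = [
    cross_val_score(clf, X, make_faulty_iteration(y, 0.10, rng), cv=5).mean()
    for _ in range(5)
]
print(f"original GT accuracy:           {acc_original:.3f}")
print(f"mean faulty-iteration accuracy: {np.mean(faulty_accs):.3f}")
```

Comparing the averaged metrics of the faulty iterations against those of the original dataset, as in the final print statements, mirrors the validation step the abstract describes: a trustworthy GT should yield a measurable gap between the two.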