Abstract

Computer-aided diagnosis (CAD) systems for the detection of cancer in medical images require precise labeling of training data. For magnetic resonance (MR) imaging (MRI) of the prostate, training labels define the spatial extent of prostate cancer (CaP); the most common source for these labels is expert segmentations. When ancillary data such as whole mount histology (WMH) sections, which provide the gold standard for cancer ground truth, are available, the manual labeling of CaP can be improved by referencing WMH. However, manual segmentation is error prone, time consuming and not reproducible. Therefore, we present the use of multimodal image registration to automatically and accurately transcribe CaP from histology onto MRI following alignment of the two modalities, in order to improve the quality of training data and hence classifier performance. We quantitatively demonstrate the superiority of this registration-based methodology by comparing its results to the manual CaP annotation of expert radiologists. Five supervised CAD classifiers were trained using the labels for CaP extent on MRI obtained by the expert and 4 different registration techniques. Two of the registration methods were affine schemes: one based on maximization of mutual information (MI), and the other a method that we previously developed, Combined Feature Ensemble Mutual Information (COFEMI), which incorporates high-order statistical features for robust multimodal registration. Two non-rigid schemes were obtained by succeeding the two affine registration methods with an elastic deformation step using thin-plate splines (TPS). In the absence of definitive ground truth for CaP extent on MRI, classifier accuracy was evaluated against 7 ground truth surrogates obtained by different combinations of the expert and registration segmentations.
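The affine schemes above are driven by maximization of mutual information between the histology and MR images. As a minimal illustrative sketch (not the paper's implementation), MI can be estimated from a joint intensity histogram of the two co-registered images; the registration then searches for the affine transform that maximizes this quantity:

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Plug-in estimate of mutual information between two aligned images,
    computed as the KL divergence between the joint intensity distribution
    and the product of its marginals."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                  # joint probability p(x, y)
    px = pxy.sum(axis=1, keepdims=True)        # marginal p(x)
    py = pxy.sum(axis=0, keepdims=True)        # marginal p(y)
    nz = pxy > 0                               # avoid log(0) terms
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

An image has maximal MI with itself, and near-zero MI with an independent image, which is what makes the criterion usable as an alignment score across modalities with very different intensity characteristics.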
For 26 multimodal MRI-WMH image pairs, all four registration methods produced a higher area under the receiver operating characteristic curve compared to that obtained from expert annotation. These results suggest that in the presence of additional multimodal image information one can obtain more accurate object annotations than achievable via expert delineation, despite vast differences between modalities that hinder image registration.

Keywords: registration, prostate cancer, CAD, dimensionality reduction, mutual information, thin plate splines, non-rigid, COFEMI, histology, MRI, multimodal, independent component analysis, Bayesian classifier
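Classifier performance above is compared via the area under the ROC curve (AUC). As a self-contained sketch (again illustrative, not the paper's evaluation code), the AUC of a set of per-pixel classifier scores against binary labels can be computed directly from the Mann-Whitney U statistic, i.e. the fraction of positive/negative pairs the classifier ranks correctly:

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen positive sample receives a
    higher score than a randomly chosen negative one (ties count half)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))
```

An AUC of 0.5 corresponds to chance-level ranking and 1.0 to perfect separation, so "higher AUC than expert annotation" means the registration-derived labels let the classifiers rank cancerous pixels above benign ones more consistently.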
