Abstract

While convolutional neural networks (CNNs) have shown potential in segmenting cardiac structures from magnetic resonance (MR) images, their clinical applications still fall short of providing reliable cardiac segmentation. It is therefore critical to quantify segmentation uncertainty in order to identify which segmentations might be troublesome. Quantifying uncertainty is also essential in real-world scenarios, where input distributions are frequently shifted from the training distribution due to sample bias and non-stationarity; well-calibrated uncertainty estimates indicate whether a model's output should (or should not) be trusted in such situations. In this work, we used a Bayesian version of our previously proposed CondenseUNet [1] framework, featuring both a learned group structure and a regularized weight pruner, to reduce the computational cost of volumetric image segmentation and to help quantify predictive uncertainty. Our study further showcases the potential of this deep-learning framework to evaluate the correlation between uncertainty and segmentation error for a given model. The proposed model was trained and tested on the Automated Cardiac Diagnosis Challenge (ACDC) dataset, comprising cine cardiac MRI data from 150 patients, for the segmentation and uncertainty estimation of the left ventricle (LV), right ventricle (RV), and myocardium (Myo) at the end-diastole (ED) and end-systole (ES) phases.
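The abstract does not spell out how the Bayesian framework turns stochastic predictions into an uncertainty map, so the following is a generic, hypothetical sketch of one common recipe: average the class probabilities over several stochastic forward passes (as in Monte Carlo dropout) and report per-pixel predictive entropy as the uncertainty. The function names, array shapes, and the use of random logits in place of real network output are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(logits, axis=-1):
    """Numerically stable softmax over the class axis."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def predictive_uncertainty(logit_samples):
    """logit_samples: (T, H, W, C) logits from T stochastic forward
    passes of a segmentation network (e.g. with dropout left active
    at test time). Returns the mean-probability segmentation and a
    per-pixel predictive-entropy map, which is high where the T
    samples disagree."""
    probs = softmax(logit_samples, axis=-1).mean(axis=0)      # (H, W, C)
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=-1)   # (H, W)
    return probs.argmax(axis=-1), entropy

# Toy example: T=20 stochastic passes over a 64x64 short-axis slice
# with 4 classes (background, LV, RV, Myo); random logits stand in
# for real network output.
samples = rng.normal(size=(20, 64, 64, 4))
seg, unc = predictive_uncertainty(samples)
print(seg.shape, unc.shape)
```

Pixels whose entropy is near the maximum (log 4 for four classes) mark regions where the segmentation is least trustworthy; correlating such maps with observed segmentation errors is the kind of analysis the abstract describes.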
