Abstract

Deep neural networks have achieved state-of-the-art performance on numerous applications in the medical field, with use cases ranging from the automation of mundane tasks to the diagnosis of life-threatening diseases. Despite these achievements, deep neural networks are considered “black boxes” due to their complex structure and general lack of transparency in their decision-making process. These attributes make it challenging to incorporate deep learning into existing clinical workflows, as decisions often need more support than blind faith in a statistical model. This paper presents an investigation of uncertainty estimation for the detection of colon polyps using deep convolutional neural networks (CNNs). We experiment with two different approaches to measuring uncertainty, Monte Carlo (MC) dropout and deep ensembles, and discuss the advantages and disadvantages of both methods in terms of computational efficiency and performance gain. Furthermore, we apply the two uncertainty methods to two different state-of-the-art CNN-based polyp segmentation architectures. The uncertainty is visualized as heatmaps on the input images and can be used to make more informed decisions on whether or not to trust a model's predictions. The results show that the predictive uncertainties enable a comparison between different models' predictions that can be interpreted as contrastive explanations, where the values are largely influenced by the degree of independence between the models in the ensemble. We also show that MC dropout fails to provide contrastive uncertainty values due to the high correlation between the models in the ensemble.
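To make the two approaches concrete, the following is a minimal PyTorch sketch of how per-pixel predictive uncertainty can be estimated with MC dropout and with a deep ensemble. It assumes a binary segmentation network that outputs per-pixel logits; the helper names `mc_dropout_uncertainty` and `ensemble_uncertainty` are illustrative, not the paper's implementation. The per-pixel variance can be rendered as the uncertainty heatmaps described above.

```python
import torch
import torch.nn as nn

def mc_dropout_uncertainty(model: nn.Module, image: torch.Tensor, n_samples: int = 20):
    """MC dropout: keep dropout active at test time and aggregate
    n_samples stochastic forward passes of a single trained model."""
    # Put the model in eval mode (freezing e.g. batch norm), then
    # re-enable only the dropout layers so each pass is stochastic.
    model.eval()
    for module in model.modules():
        if isinstance(module, (nn.Dropout, nn.Dropout2d)):
            module.train()

    with torch.no_grad():
        # Shape: (n_samples, batch, 1, H, W) of per-pixel probabilities.
        preds = torch.stack([torch.sigmoid(model(image)) for _ in range(n_samples)])

    # Mean is the segmentation estimate; variance is the uncertainty heatmap.
    return preds.mean(dim=0), preds.var(dim=0)

def ensemble_uncertainty(models, image: torch.Tensor):
    """Deep ensemble: aggregate predictions from independently
    trained models instead of stochastic passes of one model."""
    with torch.no_grad():
        preds = torch.stack([torch.sigmoid(m(image)) for m in models])
    return preds.mean(dim=0), preds.var(dim=0)
```

The structural difference between the two is the source of diversity: MC dropout reuses a single network with random dropout masks, so its samples tend to be highly correlated, whereas a deep ensemble draws predictions from independently trained networks at the cost of training and storing multiple models.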
