Abstract

Autoencoders are used in a variety of safety-critical applications, and uncertainty quantification is a key component in making such models trustworthy. As autoencoder architectures and the datasets they are trained on grow in complexity, the correlation between an input and its feature-space representation weakens. To address this latent-space degeneracy, we propose a novel method that monotonically perturbs the encoded latent space to increase the entropy of the learned representation for every input. Each perturbation yields a unique decoded signature measured by an evaluation metric in the continuous domain; these signatures can be clustered to build a knowledge base and subsequently analyzed for outliers. For test cases, where ground truth is unavailable, we perturb the latent representation and match the resulting signature to its closest counterpart in the knowledge base for uncertainty quantification and outlier detection. We evaluate the proposed method on glomeruli segmentation of frozen kidney-donor sections in whole-slide imaging, a safety-critical application in digital pathology that serves as a precursor to kidney transplantation. We demonstrate the method's effectiveness for outlier detection by ranking test cases according to their associated uncertainties, focusing the attention of medical experts on boundary cases.
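The sketch below illustrates the perturb-and-match idea described above, under stated assumptions: the encoder, decoder, and evaluation metric are placeholder stand-ins rather than the authors' trained model, and the perturbation direction, scales, and nearest-signature distance are illustrative choices, not the paper's exact formulation.

```python
# Hypothetical sketch of latent-space perturbation signatures for uncertainty
# quantification. encode/decode and evaluation_metric are placeholders; the
# names perturbation scales, signature, and knowledge_base are assumptions
# made for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def encode(x):
    # Placeholder encoder: project the input to a low-dimensional latent code.
    return x[:8]

def decode(z):
    # Placeholder decoder: map the latent code back to the input size.
    return np.tile(z, 4)

def evaluation_metric(x, x_hat):
    # Placeholder continuous-domain metric; a segmentation task might instead
    # use a soft Dice score between decoded and reference masks.
    return float(np.mean((x - x_hat) ** 2))

def signature(x, scales=np.linspace(0.0, 1.0, 10)):
    """Monotonically perturb the latent code and record the metric per scale."""
    z = encode(x)
    direction = np.ones_like(z)  # assumed fixed perturbation direction
    return np.array([evaluation_metric(x, decode(z + s * direction))
                     for s in scales])

# Build a small knowledge base of signatures from "training" inputs.
# (The paper clusters these signatures; here we keep them as-is for brevity.)
train_inputs = [rng.normal(size=32) for _ in range(100)]
knowledge_base = np.stack([signature(x) for x in train_inputs])

# For a test case without ground truth, find the closest stored signature;
# the residual distance serves as an uncertainty / outlier score.
test_x = rng.normal(size=32)
test_sig = signature(test_x)
distances = np.linalg.norm(knowledge_base - test_sig, axis=1)
print("uncertainty score (nearest-signature distance):", distances.min())
```

Ranking test cases by this nearest-signature distance would reproduce, in miniature, the triage step described in the abstract: the largest distances flag boundary cases for expert review.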
