Abstract

Uncertainty in deep learning has recently received considerable attention. While deep neural networks achieve higher accuracy than competing methods on many benchmarks, they have been shown to produce incorrect predictions with unreasonably high confidence. This has increased interest in methods that provide better confidence estimates for neural networks, some using specifically designed architectures with probabilistic building blocks, and others using a standard architecture with an additional confidence estimation step based on its output. This work proposes a confidence estimation method for Convolutional Neural Networks that fits a forest of randomized density estimation decision trees to the network activations before the final classification layer, and compares it to other confidence estimation methods based on standard architectures. The methods are compared on a semantic labelling dataset of very high resolution satellite imagery. Our results show that methods based on intermediate network activations lead to better confidence estimates for novelty detection, i.e., the discovery of classes that are not present in the training set.
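To make the general idea concrete, the sketch below illustrates one way such a confidence score could be computed: a small forest of randomized density trees (axis-aligned random splits with diagonal-Gaussian leaf models) is fitted to penultimate-layer activations, and the averaged density serves as the confidence, with low values flagging potentially novel classes. This is only a minimal sketch under stated assumptions, not the authors' implementation; the class names, hyperparameters, leaf model, and placeholder activations are all illustrative.

```python
import numpy as np


class DensityTree:
    """One randomized density tree: axis-aligned random splits with
    diagonal-Gaussian density models in the leaves (illustrative only)."""

    def __init__(self, max_depth=4, min_leaf=20, rng=None):
        self.max_depth = max_depth
        self.min_leaf = min_leaf
        self.rng = rng if rng is not None else np.random.default_rng()

    def fit(self, X):
        self._n_total = len(X)
        self._root = self._build(X, depth=0)
        return self

    def _leaf(self, X):
        mean = X.mean(axis=0)
        var = X.var(axis=0) + 1e-6        # variance floor for numerical stability
        weight = len(X) / self._n_total   # leaf mixing weight
        return ("leaf", weight, mean, var)

    def _build(self, X, depth):
        if depth >= self.max_depth or len(X) < 2 * self.min_leaf:
            return self._leaf(X)
        dim = self.rng.integers(X.shape[1])     # random split dimension
        lo, hi = X[:, dim].min(), X[:, dim].max()
        if hi <= lo:
            return self._leaf(X)
        thr = self.rng.uniform(lo, hi)          # random split threshold
        mask = X[:, dim] <= thr
        left, right = X[mask], X[~mask]
        if len(left) < self.min_leaf or len(right) < self.min_leaf:
            return self._leaf(X)
        return ("split", dim, thr,
                self._build(left, depth + 1),
                self._build(right, depth + 1))

    def log_density(self, x):
        node = self._root
        while node[0] == "split":
            _, dim, thr, left, right = node
            node = left if x[dim] <= thr else right
        _, weight, mean, var = node
        # log( weight * N(x; mean, diag(var)) )
        log_gauss = -0.5 * np.sum(np.log(2.0 * np.pi * var) + (x - mean) ** 2 / var)
        return np.log(weight) + log_gauss


class DensityForest:
    """Averages per-tree density estimates into one confidence score per sample."""

    def __init__(self, n_trees=10, **tree_kwargs):
        self.trees = [DensityTree(rng=np.random.default_rng(seed), **tree_kwargs)
                      for seed in range(n_trees)]

    def fit(self, X):
        for tree in self.trees:
            tree.fit(X)
        return self

    def score(self, X):
        # Higher score: activations resemble the training distribution.
        # Low score: candidate for a class not seen during training.
        per_tree = np.array([[t.log_density(x) for t in self.trees] for x in X])
        return np.exp(per_tree).mean(axis=1)


# Hypothetical usage: train_feats / test_feats would be activations from the
# layer before the final classifier (one row per pixel or patch), extracted
# e.g. with a forward hook; random arrays stand in for them here.
train_feats = np.random.randn(5000, 64)
test_feats = np.random.randn(100, 64)
forest = DensityForest(n_trees=10, max_depth=4).fit(train_feats)
confidence = forest.score(test_feats)   # low values suggest a novel class
```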
