Abstract

Despite significant advances in deep neural networks across diverse domains, challenges persist in safety-critical contexts, including sensitivity to domain shift and unreliable uncertainty estimation. To address these issues, this study investigates Bayesian learning for uncertainty handling in modern neural networks. However, the high-dimensional, non-convex nature of the posterior distribution imposes practical limitations on epistemic uncertainty estimation. The Laplace approximation, a cost-efficient Bayesian method, offers a practical solution by approximating the posterior with a multivariate normal distribution, but it faces computational bottlenecks in computing and storing the covariance matrix precisely. This research employs subnetwork inference, using only a subset of the parameter space for Bayesian inference. In addition, a Kronecker-factored and low-rank representation is explored to reduce space complexity and computational cost. Several corrections are introduced so that the approximated curvature converges to the exact Hessian matrix. Numerical results demonstrate the effectiveness and competitiveness of the method, while qualitative experiments highlight how the granularity of the Hessian approximation and the fraction of the parameter space used for Bayesian inference affect overconfidence in predictions and the quality of uncertainty estimates.
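The abstract does not spell out the underlying formula, but the standard Laplace approximation it refers to can be summarized as follows (a sketch from the general literature, not the paper's exact derivation):

$$
p(\theta \mid \mathcal{D}) \;\approx\; \mathcal{N}\big(\theta_{\mathrm{MAP}},\, \Sigma\big),
\qquad
\Sigma \;=\; \Big(\nabla^2_{\theta}\, \mathcal{L}(\theta; \mathcal{D})\,\big|_{\theta_{\mathrm{MAP}}}\Big)^{-1},
$$

where $\mathcal{L}$ is the negative log-posterior and $\theta_{\mathrm{MAP}}$ its minimizer. For a network with $P$ parameters, the full covariance requires $O(P^2)$ storage, which motivates the approximations named above: a Kronecker-factored scheme approximates each layer's Hessian block as $H_l \approx A_l \otimes B_l$, a low-rank representation keeps only the leading curvature directions, and subnetwork inference restricts $\Sigma$ to a subset $\theta_S \subset \theta$ while fixing the remaining weights at their MAP values.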
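For concreteness, below is a minimal sketch of this general recipe using the open-source `laplace-torch` package. This is an assumption for illustration only: the abstract does not name the authors' implementation, and the toy model, data, and hyperparameters are placeholders.

```python
# Minimal sketch: Kronecker-factored Laplace over a weight subset, via the
# open-source `laplace-torch` package (github.com/aleximmer/Laplace). This
# illustrates the general technique named in the abstract, not the authors' code.
import torch
from torch.utils.data import DataLoader, TensorDataset
from laplace import Laplace

# Toy classification model and synthetic data (assumptions for illustration).
model = torch.nn.Sequential(torch.nn.Linear(2, 16), torch.nn.Tanh(), torch.nn.Linear(16, 2))
X, y = torch.randn(256, 2), torch.randint(0, 2, (256,))
loader = DataLoader(TensorDataset(X, y), batch_size=32)

# 1) Fit a MAP estimate (weight decay corresponds to a Gaussian prior).
opt = torch.optim.Adam(model.parameters(), lr=1e-2, weight_decay=1e-4)
for _ in range(50):
    for xb, yb in loader:
        opt.zero_grad()
        torch.nn.functional.cross_entropy(model(xb), yb).backward()
        opt.step()

# 2) Laplace approximation with a Kronecker-factored Hessian structure,
#    restricted to a parameter subset ('last_layer' is the simplest built-in
#    subnetwork; the paper's subnetwork selection may differ).
la = Laplace(model, 'classification',
             subset_of_weights='last_layer', hessian_structure='kron')
la.fit(loader)
la.optimize_prior_precision(method='marglik')  # tune prior via marginal likelihood

# 3) Predictive distribution that accounts for epistemic uncertainty.
probs = la(X[:5], pred_type='glm', link_approx='probit')
print(probs)
```

Swapping `hessian_structure` among `'diag'`, `'kron'`, `'lowrank'`, and `'full'` is one way to probe the effect of Hessian-approximation granularity that the abstract's qualitative experiments examine.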
