Abstract

Uncertainty estimation is critical for numerous deep neural network (DNN) applications and has drawn increasing attention from researchers. In this study, we demonstrated an uncertainty quantification approach based on cycle consistency for DNNs used to solve inverse problems. We built forward–backward cycles using the available physical forward model and a trained DNN solving the inverse problem at hand, and we derived uncertainty estimators through regression analysis on the consistency of these forward–backward cycles. We theoretically analyzed the cycle consistency metrics and derived their relationship to the uncertainty, bias, and robustness of neural network inference. To demonstrate the effectiveness of these cycle-consistency-based uncertainty estimators, we classified corrupted and out-of-distribution input image data using widely used image deblurring and super-resolution neural networks as test beds. Our blind tests showed that our method surpassed other models in detecting previously unseen data corruption and distribution shifts. This study provides a simple-to-implement and rapid uncertainty quantification method that can be universally applied to various neural networks used to solve inverse problems.
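To make the forward–backward cycle concrete, the following is a minimal Python sketch, not the authors' exact implementation: `forward` and `inverse` are hypothetical callables standing in for the physical forward model and the trained inverse DNN, and the log-linear regression over per-cycle errors is an illustrative simplification of the regression analysis described in the abstract.

```python
import numpy as np

def cycle_consistency_features(y, forward, inverse, n_cycles=5):
    """Run forward-backward cycles on measurement y and collect
    per-cycle consistency errors.

    forward : callable, physical forward model (hypothetical stand-in)
    inverse : callable, trained DNN solving the inverse problem
    """
    errors = []
    y_k = y
    for _ in range(n_cycles):
        x_k = inverse(y_k)   # backward pass: estimate the inverse solution
        y_k = forward(x_k)   # forward pass: re-simulate the measurement
        # Relative cycle-consistency error against the original measurement
        errors.append(np.linalg.norm(y_k - y) / np.linalg.norm(y))
    # Fit log-error vs. cycle index; the slope (convergence rate) and
    # intercept (initial inconsistency) act as simple uncertainty features.
    k = np.arange(1, n_cycles + 1)
    slope, intercept = np.polyfit(k, np.log(np.asarray(errors) + 1e-12), 1)
    return {"errors": errors, "slope": slope, "intercept": intercept}
```

In this sketch, inputs whose cycles stay inconsistent or converge slowly (large intercept, shallow slope) would be flagged as likely corrupted or out-of-distribution; the thresholds and the exact regression used for classification are choices left to the practitioner.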
