Abstract

While the under-display camera (UDC) system provides an effective solution for notch-free full-screen displays, it inevitably causes severe image quality degradation due to diffraction. Recent methods have achieved decent performance with deep neural networks, yet the characteristics of the point spread function (PSF) remain underexplored. In this paper, considering the large support and spatial inconsistency of the PSF, we propose De2Net for UDC image restoration with feature deconvolution and kernel decomposition. In terms of feature deconvolution, we introduce Wiener deconvolution as a preliminary process, which alleviates the feature entanglement caused by the large PSF support. Moreover, the deconvolution kernel can be learned from training images, eliminating the tedious process of measuring the PSF. As for kernel decomposition, we observe that PSFs at different spatial positions follow regular patterns. Thus, we deploy a kernel prediction network (KPN) to handle the spatial inconsistency and improve it in two ways: (i) decomposing the predicted kernels into a set of bases and weights, and (ii) decomposing the kernels into groups with different dilation rates. These modifications substantially enlarge the receptive field under a given memory budget. Extensive experiments on three commonly used UDC datasets show that De2Net outperforms existing methods both quantitatively and qualitatively. Source code and pre-trained models are available at https://github.com/HyZhu39/De2Net.
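The feature-level Wiener deconvolution mentioned above builds on classical Wiener filtering. The following is a minimal image-space sketch of that classical operation, not the paper's learned feature-space variant; here the blur kernel and the noise-to-signal ratio are assumed to be known, whereas De2Net learns the deconvolution kernel from training data:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-6):
    """Classical Wiener deconvolution in the Fourier domain.

    blurred: 2-D degraded image.
    psf:     blur kernel (zero-padded by the FFT to the image size).
    nsr:     assumed noise-to-signal ratio, acting as regularization.
    """
    H = np.fft.fft2(psf, s=blurred.shape)   # kernel spectrum
    Y = np.fft.fft2(blurred)                # degraded-image spectrum
    # Wiener filter: conj(H) / (|H|^2 + nsr)
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(G * Y))
```

With a well-conditioned kernel and low noise, this inverts a (circular) convolution almost exactly; the `nsr` term keeps the filter stable at frequencies where the kernel response is weak, which is the same role regularization plays in the learned setting.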
