Abstract

We propose a light and fast method for restoring degraded images captured by under-display cameras (UDCs) by modifying a branched deep neural network (BNUDC). The existing BNUDC is trained to minimize the per-pixel intensity difference between a ground-truth and a restored image using an L1 or L2 loss, and it has a large number of learnable parameters in order to achieve state-of-the-art performance on distortion metrics such as PSNR and SSIM. Commercializing deep-learning-based image restoration, however, requires high perceptual image quality and real-time inference speed. To this end, we downscale the original BNUDC so that it restores a 2K-resolution image within 44 ms, the frame time of a 24 Hz video, and we train the network to reconstruct perceptually pleasing images using 'perceptual optimization'. Our model has fewer than 1M learnable parameters, its inference time is below 40 ms, and it outperforms the original network in terms of perceptual image quality.
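
The abstract does not spell out what 'perceptual optimization' consists of beyond replacing the purely pixel-wise L1/L2 objective. The sketch below illustrates one common form of such an objective, an L1 pixel term combined with a distance between frozen VGG16 features (PyTorch); the layer index and loss weights are illustrative assumptions, not the authors' settings.

```python
# Minimal sketch of a perceptual training objective, assuming PyTorch and
# torchvision. The choice of VGG16, the feature layer, and the weights are
# hypothetical; the paper does not specify its exact perceptual loss.
import torch
import torch.nn as nn
from torchvision.models import vgg16, VGG16_Weights


class PerceptualLoss(nn.Module):
    """L1 pixel loss plus an L1 distance between frozen VGG16 features."""

    def __init__(self, feature_layer: int = 16,
                 pixel_weight: float = 1.0,
                 perceptual_weight: float = 0.1):
        super().__init__()
        # Frozen VGG16 feature extractor truncated at the chosen layer.
        features = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features[:feature_layer]
        for p in features.parameters():
            p.requires_grad = False
        self.vgg = features.eval()
        self.l1 = nn.L1Loss()
        self.pixel_weight = pixel_weight
        self.perceptual_weight = perceptual_weight

    def forward(self, restored: torch.Tensor, ground_truth: torch.Tensor) -> torch.Tensor:
        # Pixel-wise fidelity term (as in the original L1-trained BNUDC).
        pixel_term = self.l1(restored, ground_truth)
        # Feature-space term encouraging perceptually similar reconstructions.
        perceptual_term = self.l1(self.vgg(restored), self.vgg(ground_truth))
        return self.pixel_weight * pixel_term + self.perceptual_weight * perceptual_term
```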
