Abstract

Images captured under low-light conditions often suffer from various degradations. Retinex models are highly effective for enhancing low-light images. Analytical optimization models are interpretable but lack flexibility across diverse scenes, whereas data-driven learning models adapt to diverse scenes but are less interpretable. To combine the advantages of both, we propose a parametric Retinex model with pixel-wise varying parameters and unroll its iterative algorithm into an unfolding network so that the parameters can be learned. We call this Deep Parametric REtinex Decomposition (DPRED). Built on this Retinex decomposition, we present a novel network for low-light image enhancement, also called DPRED. The network comprises three modules: parametric Retinex decomposition, enhancement, and refinement. The first two modules operate on the V channel in the HSV color space, which avoids color deviation. The refinement module removes noise in the enhanced RGB image. Extensive experiments demonstrate that the proposed method is effective for low-light image enhancement and significantly outperforms recent baselines.
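As a rough illustration of the pipeline structure described above, the sketch below wires together three placeholder modules (parametric Retinex decomposition, enhancement, refinement), with the first two operating on the V channel and the last on the enhanced RGB image. All module names, layer shapes, stage counts, and update rules are assumptions made for illustration only; this is not the authors' implementation. The sketch uses the fact that the V channel of HSV equals the per-pixel maximum of R, G, and B.

```python
# Minimal, hypothetical sketch of a DPRED-style three-module pipeline
# (assumed structure; not the authors' code).
import torch
import torch.nn as nn

class RetinexDecomposition(nn.Module):
    """Stand-in for the unfolded parametric Retinex decomposition on the V channel."""
    def __init__(self, channels=16, stages=3):
        super().__init__()
        self.stages = nn.ModuleList([
            nn.Sequential(nn.Conv2d(2, channels, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(channels, 2, 3, padding=1))
            for _ in range(stages)
        ])

    def forward(self, v):
        # Initialize illumination L and reflectance R, then refine them stage by
        # stage, mimicking an unrolled iterative algorithm with learned updates.
        L, R = v.clone(), torch.ones_like(v)
        for stage in self.stages:
            delta = stage(torch.cat([L, R], dim=1))
            L = (L + delta[:, :1]).clamp(1e-3, 1.0)
            R = (v / L).clamp(0.0, 1.0)
        return L, R

class Enhancement(nn.Module):
    """Brightens the illumination map with a simple learned gamma-like curve."""
    def __init__(self):
        super().__init__()
        self.gamma = nn.Parameter(torch.tensor(0.5))

    def forward(self, L, R):
        return (L.clamp(1e-3) ** self.gamma) * R  # enhanced V channel

class Refinement(nn.Module):
    """Residual denoiser applied to the enhanced RGB image."""
    def __init__(self, channels=16):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(channels, 3, 3, padding=1))

    def forward(self, rgb):
        return (rgb + self.net(rgb)).clamp(0.0, 1.0)

class DPREDSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.decompose = RetinexDecomposition()
        self.enhance = Enhancement()
        self.refine = Refinement()

    def forward(self, rgb):
        # V channel of HSV is the per-pixel max over R, G, B.
        v = rgb.amax(dim=1, keepdim=True)
        L, R = self.decompose(v)
        v_new = self.enhance(L, R)
        # Rescale RGB by the V ratio so hue and saturation are unchanged.
        rgb_new = rgb * (v_new / v.clamp_min(1e-3))
        return self.refine(rgb_new.clamp(0.0, 1.0))

if __name__ == "__main__":
    x = torch.rand(1, 3, 64, 64) * 0.2   # synthetic dark image
    print(DPREDSketch()(x).shape)          # torch.Size([1, 3, 64, 64])
```

Rescaling the RGB channels by the ratio of enhanced to original V leaves hue and saturation untouched, which is one simple way to realize the color-deviation-avoiding property of operating on the V channel that the abstract describes.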
