Abstract

Light Field (LF) images have the unique advantage of recording a scene from multiple viewpoints, which enables many applications, such as refocusing and depth estimation. However, low-light conditions can severely degrade these applications. In this paper, we propose a two-stage deep learning framework for LF restoration under low-light imaging. First, a multi-to-one (MTO) network restores each view separately by utilizing multiple auxiliary views. All the views share the same feature extractor, which employs an efficient spatial-channel attention mechanism to extract more informative features. A channel-attention feature fusion (CAFF) module is designed to selectively fuse useful complementary information from the auxiliary views, with a learnable global scalar that adjusts the importance of the auxiliary features. Then, the outputs of the MTO network are further enhanced by an all-to-all (ATA) network, which uses spatial and angular residual blocks to process all the views synchronously, fully encoding the spatial-angular information. Extensive experiments demonstrate the superior performance and robustness of our method, i.e., it can effectively restore the luminance, spatial details, and angular geometries of LF images under various light levels.
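The CAFF fusion step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the attention form (global average pooling followed by a softmax over views) and all names, including the scalar `alpha`, are assumptions standing in for the learned components.

```python
import numpy as np

def channel_attention_fusion(center_feat, aux_feats, alpha=0.5):
    """Hedged sketch of channel-attention feature fusion (CAFF).

    center_feat: (C, H, W) features of the view being restored.
    aux_feats:   (V, C, H, W) features from V auxiliary views.
    alpha:       stands in for the learnable global scalar that
                 balances the auxiliary contribution.
    """
    # Channel descriptor per auxiliary view via global average pooling.
    desc = aux_feats.mean(axis=(2, 3))                          # (V, C)
    # Softmax across views for each channel -> selection weights,
    # so more informative views contribute more per channel.
    w = np.exp(desc) / np.exp(desc).sum(axis=0, keepdims=True)  # (V, C)
    # Weighted sum of auxiliary features over the view axis.
    fused_aux = (w[:, :, None, None] * aux_feats).sum(axis=0)   # (C, H, W)
    # Inject the fused auxiliary information into the center view.
    return center_feat + alpha * fused_aux
```

In a trained network, the pooling-plus-softmax weights and `alpha` would be learned parameters rather than the fixed operations shown here.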
