Abstract

In long-range imaging under low light, only a Low-Resolution (LR) image with poor visibility can be obtained due to the limitations of physical devices. Optical lenses can capture high-quality distant images, but they are expensive and bulky. Low-light enhancement and Super-Resolution (SR) are therefore essential for mobile phones and surveillance cameras, with wide applications in video surveillance, remote sensing, and night photography. Images taken at long range in low light usually lose detail not only because of the low photon count but also because of the low Signal-to-Noise Ratio (SNR), which makes their restoration a highly ill-posed problem. To address these two problems simultaneously, we propose a Lightening Super-Resolution (LSR) deep network. The proposed network uses back-projection to iteratively learn enhanced and dark features in low-resolution space, and up-samples the enhanced features at the last stage of the network to produce the final enhanced, high-resolution image. To train the network, the low-light images of the publicly available LOw-Light (LOL) dataset are down-sampled using bicubic interpolation, and the ground-truth images serve as the enhanced high-resolution targets. For a fair comparison, several existing SR networks are trained on the same dataset and their performance is compared. The promising results open up many opportunities for future work.
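
The abstract describes how training pairs are formed: LOL low-light images are bicubic-down-sampled to obtain LR inputs, while the corresponding normal-light ground-truth images act as enhanced high-resolution targets. The snippet below is a minimal sketch of that pair construction; the directory layout, file names, and the scale factor of 4 are assumptions for illustration and are not taken from the paper.

```python
# Hypothetical training-pair construction: bicubic-down-sampled LOL low-light
# image as LR input, unmodified ground-truth image as enhanced HR target.
from pathlib import Path
from PIL import Image

SCALE = 4  # assumed super-resolution factor; the paper's setting may differ


def make_training_pair(low_path: Path, gt_path: Path, scale: int = SCALE):
    """Return (LR low-light input, HR enhanced target) as PIL images."""
    low = Image.open(low_path).convert("RGB")    # dark image from the LOL 'low' split
    gt = Image.open(gt_path).convert("RGB")      # normal-light ground truth
    lr_size = (low.width // scale, low.height // scale)
    lr_low = low.resize(lr_size, Image.BICUBIC)  # bicubic down-sampling, as stated
    return lr_low, gt


if __name__ == "__main__":
    # Paths below are placeholders for wherever the LOL dataset is stored locally.
    lr, hr = make_training_pair(Path("LOL/our485/low/1.png"),
                                Path("LOL/our485/high/1.png"))
    print("LR input:", lr.size, "HR target:", hr.size)
```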
