Benefiting from their remarkable non-linear expressive capacity, deep convolutional neural networks have driven considerable progress in low-illumination (LI) remote sensing image enhancement. The key lies in sufficiently exploiting both the specific long-range (e.g., non-local similarity) and short-range (e.g., local continuity) structures distributed across different scales of each input LI image to build an appropriate deep mapping from LI images to their high-quality counterparts. However, most existing methods exploit only the general long-range or short-range structures shared across most images, individually and at a single scale, which limits their generalization in challenging cases. We propose a multi-scale long–short range structure aggregation learning network for remote sensing image enhancement. Its flexible architecture extracts features at different scales of the input LI image through branches, each comprising a short-range structure learning module and a long-range structure learning module. These modules extract and combine structural details from the input image at different scales and cast them into pixel-wise scale factors that enhance the image at a finer granularity. The network thus sufficiently leverages the specific long-range and short-range structures of the input LI image, yielding superior enhancement performance, as demonstrated by extensive experiments on both synthetic and real datasets.
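To make the described architecture concrete, the PyTorch-style sketch below illustrates one plausible organization of such a network: per-scale branches that pair a short-range module (small-kernel convolutions for local continuity) with a long-range module (self-attention for non-local similarity), aggregated into pixel-wise scale factors applied to the LI input. All module names, channel counts, scale choices, and the sigmoid-based factor mapping are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ShortRangeModule(nn.Module):
    """Captures short-range (local continuity) structure with 3x3 convolutions."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))
    def forward(self, x):
        return x + self.body(x)

class LongRangeModule(nn.Module):
    """Captures long-range (non-local similarity) structure with self-attention."""
    def __init__(self, ch):
        super().__init__()
        self.q = nn.Conv2d(ch, ch // 2, 1)
        self.k = nn.Conv2d(ch, ch // 2, 1)
        self.v = nn.Conv2d(ch, ch, 1)
    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)   # (b, hw, c/2)
        k = self.k(x).flatten(2)                   # (b, c/2, hw)
        v = self.v(x).flatten(2).transpose(1, 2)   # (b, hw, c)
        attn = torch.softmax(q @ k / (c // 2) ** 0.5, dim=-1)
        return x + (attn @ v).transpose(1, 2).reshape(b, c, h, w)

class Branch(nn.Module):
    """One scale-specific branch fusing short- and long-range structure features."""
    def __init__(self, ch):
        super().__init__()
        self.short = ShortRangeModule(ch)
        self.long = LongRangeModule(ch)
        self.fuse = nn.Conv2d(2 * ch, ch, 1)
    def forward(self, x):
        return self.fuse(torch.cat([self.short(x), self.long(x)], dim=1))

class MultiScaleEnhancer(nn.Module):
    """Aggregates per-scale branch outputs into pixel-wise enhancement factors."""
    def __init__(self, ch=32, scales=(1, 2)):
        super().__init__()
        self.head = nn.Conv2d(3, ch, 3, padding=1)
        self.branches = nn.ModuleList(Branch(ch) for _ in scales)
        self.scales = scales
        self.tail = nn.Conv2d(ch * len(scales), 3, 3, padding=1)
    def forward(self, img):
        feat = self.head(img)
        outs = []
        for s, branch in zip(self.scales, self.branches):
            f = F.avg_pool2d(feat, s) if s > 1 else feat   # move to coarser scale
            f = branch(f)
            if s > 1:                                      # back to full resolution
                f = F.interpolate(f, size=feat.shape[-2:],
                                  mode='bilinear', align_corners=False)
            outs.append(f)
        # Pixel-wise scale factors, applied multiplicatively to the LI input
        # (the factor range [0, 4] here is an arbitrary illustrative choice).
        factors = torch.sigmoid(self.tail(torch.cat(outs, dim=1))) * 4.0
        return img * factors

if __name__ == "__main__":
    net = MultiScaleEnhancer()
    low_light = torch.rand(1, 3, 64, 64)   # dummy LI image
    print(net(low_light).shape)            # torch.Size([1, 3, 64, 64])
```

In this sketch the multiplicative pixel-wise factors preserve the input's spatial layout while letting each pixel be brightened by a different amount, which is one common way to realize "enhancement at a finer granularity"; the actual aggregation scheme in the paper may differ.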