Abstract

Images captured in challenging conditions often suffer from the co-existence of low contrast and low resolution. However, most joint enhancement methods focus on fitting a direct mapping from degraded images to high-quality images, which proves insufficient for such complex, compound degradation. To mitigate this, we propose a novel semantic prior guided interactive network (MSIRNet) to enable effective image representation learning for joint low-light enhancement and super-resolution. Specifically, a local histogram equalization (HE) based domain transfer strategy is developed to bridge the domain gap between low-light images and the recognition scope of a generic segmentation model, thereby obtaining semantic priors of rich granularity. To represent hybrid-scale features with semantic attributes, we propose a multi-grained semantic progressive interaction module built on an omnidirectional blend self-attention mechanism, which facilitates deep interaction between diverse semantic knowledge and visual features. Moreover, through our feature normalized complementary module, which perceives context and cross-feature relationships, MSIRNet adaptively integrates image features with auxiliary visual atoms provided by a codebook, endowing the model with high-fidelity reconstruction capability. Extensive experiments demonstrate the superior performance of MSIRNet and its ability to restore visually and perceptually pleasing normal-light, high-resolution images.
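
Two of the mechanisms named above can be sketched concretely. First, a minimal sketch of a local HE-based domain transfer, realized here with CLAHE on the luminance channel so that a generic, off-the-shelf segmentation model sees inputs closer to its training distribution; the function name `he_domain_transfer` and the CLAHE parameters are illustrative assumptions, not the paper's released code.

```python
import cv2
import numpy as np

def he_domain_transfer(low_light_bgr: np.ndarray,
                       clip_limit: float = 2.0,
                       tile_grid: tuple = (8, 8)) -> np.ndarray:
    """Brighten an 8-bit BGR low-light image via local histogram equalization."""
    lab = cv2.cvtColor(low_light_bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    l_eq = clahe.apply(l)  # equalize luminance locally; leave chroma untouched
    return cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)

# Usage: feed the transferred image to a pretrained segmentation model to
# extract semantic priors, e.g. seg_logits = segmenter(he_domain_transfer(img)).
```

Second, the codebook integration can be read as a VQ-style nearest-neighbor lookup, where each spatial feature vector is replaced by its closest "visual atom". The gating at the end is a generic stand-in for the feature normalized complementary module, whose exact form the abstract does not specify.

```python
import torch

def codebook_match(feat: torch.Tensor, codebook: torch.Tensor) -> torch.Tensor:
    """feat: (B, C, H, W) image features; codebook: (K, C) visual atoms."""
    b, c, h, w = feat.shape
    flat = feat.permute(0, 2, 3, 1).reshape(-1, c)   # (B*H*W, C)
    dists = torch.cdist(flat, codebook)              # (B*H*W, K) pairwise L2
    idx = dists.argmin(dim=1)                        # nearest atom per position
    return codebook[idx].view(b, h, w, c).permute(0, 3, 1, 2)

# Assumed adaptive fusion: a learned per-pixel gate blends image features
# with the matched atoms before reconstruction, e.g.
#   gate = torch.sigmoid(conv(torch.cat([feat, quant], dim=1)))
#   fused = gate * feat + (1 - gate) * quant
```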
