A CNN-Transformer Network Based SNR-Guided High-Frequency Reconstruction for Low-Light Image Enhancement

Abstract

Photographs taken in low-light conditions have a low signal-to-noise ratio (SNR) and impaired visual quality. The low SNR causes fine details, textures, and noise to mix, making small-scale textures difficult to reconstruct. Inspired by this observation, we propose an SNR-guided CNN-Transformer network for high-frequency restoration during low-light image enhancement. The proposed method first decomposes the image into high-frequency and low-frequency components via an image decomposition module. The low-frequency component is processed by a trainable Low Frequency SNR Perception (LFSP) module, which achieves strong denoising performance and produces SNR-enhanced images with clearer edge contours. Guided by the low-frequency SNR feature maps, the details and textures of the high-frequency components are enhanced by a combination of transformer and convolutional networks, thereby compensating for detail distortions in the high-frequency components of the image. Subjective and objective experiments demonstrate that the proposed method outperforms existing approaches in terms of detail and structure preservation.
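The decomposition step described in the abstract can be sketched in a few lines. Here a simple box-filter low-pass stands in for the paper's learned image decomposition module, and the SNR map follows the common local-mean-over-noise estimate; both are illustrative assumptions, not the authors' exact design:

```python
import numpy as np

def decompose(img, ksize=5):
    """Split a grayscale image into low- and high-frequency parts.

    A box-filter low-pass stands in for the paper's learned
    decomposition module; the high-frequency part is the residual,
    so low + high reconstructs the input exactly.
    """
    pad = ksize // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    low = np.zeros_like(img, dtype=float)
    for dy in range(ksize):
        for dx in range(ksize):
            low += padded[dy:dy + h, dx:dx + w]
    low /= ksize * ksize
    high = img - low
    return low, high

def snr_map(img, ksize=5):
    """Rough per-pixel SNR: local mean over magnitude of the residual."""
    low, high = decompose(img, ksize)
    return np.abs(low) / (np.abs(high) + 1e-6)
```

In the paper the SNR map guides which regions the transformer branch should trust; here it is only computed, not used for guidance.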

Similar Papers
  • Research Article
  • Cited by 46
  • 10.1109/lra.2020.3048667
A Two-Stage Unsupervised Approach for Low Light Image Enhancement
  • Oct 1, 2021
  • IEEE Robotics and Automation Letters
  • Junjie Hu + 5 more

As vision-based perception methods are usually built on a normal-light assumption, deploying them in low-light environments raises serious safety issues. Recently, deep learning based methods have been proposed to enhance low-light images by penalizing the pixel-wise loss between low-light and normal-light images. However, most of them suffer from the following problems: 1) the need for pairs of low-light and normal-light images for training, 2) poor performance on dark images, and 3) the amplification of noise. To alleviate these problems, in this letter we propose a two-stage unsupervised method that decomposes low-light image enhancement into a pre-enhancement and a post-refinement problem. In the first stage, we pre-enhance a low-light image with a conventional Retinex-based method. In the second stage, we use a refinement network learned with adversarial training to further improve image quality. Experimental results show that our method outperforms previous methods on four benchmark datasets. In addition, we show that our method can significantly improve feature-point matching and simultaneous localization and mapping in low-light conditions.
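The first-stage idea, conventional Retinex pre-enhancement, can be illustrated with a single-scale Retinex sketch. The box-filter illumination estimate and the final rescaling are assumptions here, and the adversarial refinement stage is not reproduced:

```python
import numpy as np

def retinex_pre_enhance(img, ksize=15, eps=1e-6):
    """Single-scale Retinex pre-enhancement (illustrative stand-in
    for the paper's conventional Retinex stage).

    Reflectance = log(I) - log(smoothed illumination estimate),
    then rescaled to [0, 1].
    """
    pad = ksize // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    illum = np.zeros_like(img, dtype=float)  # box-filter illumination
    for dy in range(ksize):
        for dx in range(ksize):
            illum += padded[dy:dy + h, dx:dx + w]
    illum /= ksize * ksize
    r = np.log(img + eps) - np.log(illum + eps)
    r -= r.min()
    return r / (r.max() + eps)
```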

  • Research Article
  • Cited by 24
  • 10.1016/j.displa.2023.102614
A survey on learning-based low-light image and video enhancement
  • Dec 7, 2023
  • Displays
  • Jing Ye + 2 more


  • Conference Article
  • Cited by 2
  • 10.1109/stcr55312.2022.10009364
Low-light Color Image Enhancement based on Dark Channel Prior with Retinex Model
  • Dec 10, 2022
  • Sameena + 1 more

Low-light image enhancement plays a crucial role in night-vision applications and in road-monitoring systems for AI-assisted vehicles. Conventional methods, however, are unable to remove the darkness from source images and yield poor visibility. This article therefore proposes an advanced low-light image enhancement approach using the dark channel prior (DCP). First, light-reflection (Retinex) angles are identified and red-channel estimation is used to restore light-direction attention. DCP is then used to identify the dark background region together with its light-illumination properties. New light properties are generated via transmission-map estimation and refinement, and the image radiance is recovered using the updated transmission map, yielding a darkness-removed image. Finally, a denoising operation produces the best visual-quality output. Simulations on the ExDark dataset show that the proposed method achieves superior subjective and objective performance compared with state-of-the-art approaches.
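The dark channel prior at the core of this pipeline has a standard formulation (He et al.). The sketch below shows the dark channel and the resulting transmission estimate; the article's red-channel and refinement steps are omitted:

```python
import numpy as np

def dark_channel(img, patch=3):
    """Dark channel prior: per-pixel minimum over RGB channels and
    a local patch. img: H x W x 3 array in [0, 1]."""
    min_rgb = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode="edge")
    h, w = min_rgb.shape
    out = np.full((h, w), np.inf)
    for dy in range(patch):
        for dx in range(patch):
            out = np.minimum(out, padded[dy:dy + h, dx:dx + w])
    return out

def transmission(img, atmos, omega=0.95, patch=3):
    """Transmission estimate t = 1 - omega * dark_channel(I / A)."""
    return 1.0 - omega * dark_channel(img / atmos, patch)
```

For low-light images the prior is often applied to the inverted image, a detail this sketch leaves out.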

  • Research Article
  • Cited by 9
  • 10.1016/j.cviu.2024.104063
Self-supervised network for low-light traffic image enhancement based on deep noise and artifacts removal
  • Jun 22, 2024
  • Computer Vision and Image Understanding
  • Houwang Zhang + 3 more


  • Research Article
  • Cited by 10
  • 10.1016/j.jvcir.2023.103887
Event-guided low light image enhancement via a dual branch GAN
  • Jun 30, 2023
  • Journal of Visual Communication and Image Representation
  • Haiyan Jin + 3 more


  • Research Article
  • Cited by 28
  • 10.1109/tip.2024.3486610
AnlightenDiff: Anchoring Diffusion Probabilistic Model on Low Light Image Enhancement.
  • Jan 1, 2024
  • IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
  • Cheuk-Yiu Chan + 3 more

Low-light image enhancement aims to improve the visual quality of images captured under poor illumination. However, enhancing low-light images often introduces image artifacts, color bias, and low SNR. In this work, we propose AnlightenDiff, an anchoring diffusion model for low-light image enhancement. Diffusion models can enhance a low-light image to a well-exposed one by iterative refinement, but require anchoring to ensure that enhanced results remain faithful to the input. We propose a Dynamical Regulated Diffusion Anchoring mechanism and Sampler to anchor the enhancement process. We also propose a Diffusion Feature Perceptual Loss tailored to diffusion-based models that utilizes different loss functions in the image domain. AnlightenDiff demonstrates the effectiveness of diffusion models for low-light enhancement and achieves high perceptual quality. Our techniques point to a promising direction for applying diffusion models to image enhancement.

  • Research Article
  • Cited by 8
  • 10.1109/access.2022.3227069
Low Light Image Enhancement Based on Multi-Scale Network Fusion
  • Jan 1, 2022
  • IEEE Access
  • Xuan Liu + 8 more

Researchers have made great progress in object detection, but most studies focus on images captured under normal lighting and ignore detection in low light. Images used in nighttime autonomous driving and surveillance are usually captured in low-light environments and suffer from poor brightness, low contrast, and conspicuous noise, which cause substantial information loss and degrade object-detection performance. In this paper, we propose a low-light image enhancement method based on multi-scale network fusion to address these problems. Because low-light images carry relatively little useful information, we propose a preprocessing method based on nonlinear image transformation and fusion, which increases the amount of usable information. Then, to obtain a better enhancement effect, a multi-scale feature fusion method is proposed that fuses features from different resolution levels of the network; this improves the details of dark regions and mitigates the feature loss caused by overly deep networks. Experimental results show that our method achieves better enhancement than current mainstream methods on several datasets. The average recall of object detection with our method improves by 38.25%, demonstrating its effectiveness for autonomous driving, surveillance, and related fields.
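The nonlinear-transformation-and-fusion preprocessing can be illustrated by fusing a few gamma-brightened variants of the input. The specific gamma values and the averaging rule are assumptions, not the paper's exact transforms:

```python
import numpy as np

def nonlinear_fuse(img, gammas=(0.4, 0.6, 0.8)):
    """Illustrative nonlinear-transform-and-fuse preprocessing:
    brighten the input with several gamma curves (gamma < 1
    brightens values in [0, 1]) and average the variants."""
    variants = [np.power(img, g) for g in gammas]
    return np.mean(variants, axis=0)
```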

  • Research Article
  • Cited by 20
  • 10.1016/j.neucom.2022.12.007
A deep thermal-guided approach for effective low-light visible image enhancement
  • Dec 6, 2022
  • Neurocomputing
  • Yanpeng Cao + 6 more


  • Conference Article
  • 10.1109/icbar58199.2022.00008
Improved Retinex-based low light image enhancement algorithm
  • Nov 1, 2022
  • Haotian Liu + 1 more

Low-light image enhancement is one of the most challenging tasks in computer vision. Traditional unsupervised learning methods typically use an image-to-image transformation model to solve low-light image enhancement but are unable to suppress the noise prevalent in images captured in real-world low-light conditions. To address a series of degradations such as low brightness, high noise, and weak contrast, this paper proposes a new network architecture for enhancing low-light images. The network comprises three sub-networks: decomposition, denoising, and enhancement. The decomposition network decomposes the image into an illumination image and a reflection image. The denoising network denoises the reflection image in the frequency domain, and the enhancement network enhances the illumination map through several convolution operations. Finally, the denoised reflection image and the enhanced illumination image are multiplied pixel by pixel to obtain the result. Experiments show that the proposed method effectively improves brightness and contrast, removes noise, and offers clear advantages in both subjective and objective evaluation metrics.

  • Research Article
  • Cited by 2
  • 10.1038/s41598-025-92161-y
A hybrid framework for curve estimation based low light image enhancement
  • Mar 12, 2025
  • Scientific Reports
  • Yutao Jin + 7 more

Images captured in low-light conditions often suffer from poor visibility and noise corruption. Low-light image enhancement (LLIE) aims to restore the brightness of under-exposed images. However, most previous LLIE solutions enhance low-light images via global mapping without considering the varied degradations of dark regions. Moreover, these methods rely on convolutional neural networks for training, which have limitations in capturing long-range dependencies. To this end, we construct a hybrid framework dubbed HybLLIE that combines transformer and convolutional designs for the LLIE task. First, we propose a light-aware transformer (LAFormer) block that utilizes brightness representations to direct the modeling of valuable information in low-light regions; this is achieved with a learnable feature reassignment modulator that encourages inter-channel feature competition. Second, we introduce a SeqNeXt block, a ConvNet-based model that processes sequences of image patches, to capture local context. Third, we devise an efficient self-supervised mechanism to eliminate inappropriate features from the given under-exposed samples and employ high-order curves to brighten the low-light images. Extensive experiments demonstrate that HybLLIE achieves comparable performance to 17 state-of-the-art methods on 7 representative datasets.
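The high-order-curve brightening mentioned above is in the spirit of curve-estimation LLIE methods. A minimal sketch with a scalar curve parameter follows; in the learned setting that parameter would be a per-pixel map predicted by the network, so the scalar here is purely illustrative:

```python
import numpy as np

def curve_enhance(img, alpha=0.6, iters=4):
    """Iteratively apply the brightening curve
    LE(x) = x + alpha * x * (1 - x), which maps [0, 1] to [0, 1]
    and lifts dark values more than bright ones."""
    x = img.astype(float)
    for _ in range(iters):
        x = x + alpha * x * (1.0 - x)
    return x
```

Stacking the quadratic curve over several iterations yields the "high-order" behavior: dark pixels are raised aggressively while values near 1 stay fixed.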

  • Book Chapter
  • 10.1007/978-981-19-9819-5_30
FLIME—Fast Low Light Image Enhancement for Real-Time and Low-Compute Environments Using a Data-Centric Approach
  • Jan 1, 2023
  • P Vinay + 1 more

Low light image enhancement is a nontrivial task that finds application in autonomous driving, night vision devices for defence systems and other tasks that involve low light object detection. A majority of the existing solutions proposed, based on deep learning models, are resource intensive and take considerable time to process. Consequently, these are not suitable for processing a sequence of images for real time or even near-real-time video applications. This paper presents FLIME, a fast and efficient solution for the enhancement of low light images, which is amenable for use with a real-time video feed, and in low-compute environments because of its lightweight nature. FLIME is a pipeline of two steps: in the first step, a model is used to map input RGB values to output RGB values and, in the next step, contrast adjustment of the image is effected. The crux of the method comprises a linear transformation of the input RGB values. Since the processing uses a data-centric approach, a carefully curated dataset comprising images taken in low and bright light conditions is used in this study. Design considerations that underlie the preparation of the dataset are presented with the details of the proposed solution. Experimental results that include a qualitative comparison of the images enhanced by FLIME as well as a quantitative comparison of their PSNR, SSIM values and processing times against those of other enhancement methods on publicly available data for this purpose demonstrate the efficacy of the proposed solution.
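FLIME's two-step pipeline (a linear map of RGB values, then contrast adjustment) can be sketched as below. The scalar gain and the percentile stretch are placeholder assumptions standing in for the learned mapping and the paper's contrast step:

```python
import numpy as np

def flime_like(img, gain=2.5, low_pct=2, high_pct=98):
    """Illustrative two-step enhancement in the spirit of FLIME.
    img: float array in [0, 1]."""
    # Step 1: linear transformation of the input RGB values
    # (a scalar gain stands in for the learned mapping).
    mapped = np.clip(img * gain, 0.0, 1.0)
    # Step 2: contrast adjustment via a percentile stretch.
    lo, hi = np.percentile(mapped, [low_pct, high_pct])
    return np.clip((mapped - lo) / max(hi - lo, 1e-6), 0.0, 1.0)
```

Both steps are pointwise or global, which is consistent with the abstract's emphasis on speed and suitability for low-compute, near-real-time use.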

  • Research Article
  • Cited by 4
  • 10.1038/s41598-025-95366-3
A hybrid zero-reference and dehazing network for joint low-light underground image enhancement
  • Mar 24, 2025
  • Scientific Reports
  • Qing Du + 4 more

The raw images captured by underground vision sensors in underground mine settings are disturbed by dim lighting, high dust levels, and complex electromagnetic conditions, suffering from high noise, low illumination, and low-resolution contamination, which further affects the supervision of the vision sensors. However, existing image enhancement methods relying on synthesized datasets are not suitable for improving images in real underground mine settings. This study focuses on addressing these challenges. We collected a large number of underground mine images and proposed a novel image enhancement approach. Inspired by visual image processing techniques, this approach combines low-light enhancement and dehazing methods to address the issues of uneven lighting and fog distortion. Specifically, the proposed Zero-Reference Depth Curve Estimation-Dehazing Network (Z-DCE-DNet) aims to enhance underground mine images. It addresses two key aspects: (1) enhancing low-light images by incorporating higher-order loss curves into the DCE-Net backbone and introducing a new loss function to optimize network learning for improved low-light image quality; (2) addressing the color distortion and blur caused by low light enhancement through post-processing using convolutional neural networks, with AOD-Net enhancing the clarity of downhole images. Extensive experimental results demonstrate that the Z-DCE-DNet method produces visually superior enhanced images, and comparative analyses of multiple object detectors reveal that the enhanced images lead to improved detection outcomes.

  • Research Article
  • Cited by 4
  • 10.18280/ts.390413
Low-Light Image Enhancement and Target Detection Based on Deep Learning
  • Aug 31, 2022
  • Traitement du Signal
  • Zhuo Yao

Most computer vision applications demand input images that meet their specific requirements. To complete different vision tasks, e.g., object detection, object recognition, and object retrieval, low-light images must be enhanced by different methods to achieve different processing effects. Existing image enhancement methods based on non-physical imaging models, and image generation methods based on deep learning, are not ideal for low-light image processing. To solve this problem, this paper explores low-light image enhancement and target detection based on deep learning. First, a simplified expression is constructed for the optical imaging model of low-light images, and a Haze-line is proposed for color correction, which can effectively enhance low-light images based on the global background light and medium transmission rate of the optical imaging model. Next, the network framework adopted by the proposed low-light image enhancement model is introduced in detail: the framework includes two deep domain adaptation modules that realize domain transformation and image enhancement, respectively, and the loss functions of the model are presented. To detect targets in the enhanced output, a joint enhancement and target detection method is proposed for low-light images. The effectiveness of the constructed model is demonstrated through experiments.

  • Research Article
  • Cited by 540
  • 10.1109/tpami.2021.3126387
Low-Light Image and Video Enhancement Using Deep Learning: A Survey.
  • Dec 1, 2022
  • IEEE Transactions on Pattern Analysis and Machine Intelligence
  • Chongyi Li + 6 more

Low-light image enhancement (LLIE) aims at improving the perception or interpretability of an image captured in an environment with poor illumination. Recent advances in this area are dominated by deep learning-based solutions, where many learning strategies, network structures, loss functions, training data, etc. have been employed. In this paper, we provide a comprehensive survey to cover various aspects ranging from algorithm taxonomy to unsolved open issues. To examine the generalization of existing methods, we propose a low-light image and video dataset, in which the images and videos are taken by different mobile phones' cameras under diverse illumination conditions. Besides, for the first time, we provide a unified online platform that covers many popular LLIE methods, of which the results can be produced through a user-friendly web interface. In addition to qualitative and quantitative evaluation of existing methods on publicly available and our proposed datasets, we also validate their performance in face detection in the dark. This survey together with the proposed dataset and online platform could serve as a reference source for future study and promote the development of this research field. The proposed platform and dataset as well as the collected methods, datasets, and evaluation metrics are publicly available and will be regularly updated. Project page: https://www.mmlab-ntu.com/project/lliv_survey/index.html.

  • Research Article
  • Cited by 3
  • 10.11648/j.ijdsa.20200604.11
Low Light Image Enhancement for Dark Images
  • Jan 1, 2020
  • International Journal of Data Science and Analysis
  • Akshay Patil + 4 more

Images play an important role in today's technological world and drive progress in multimedia communication and many image-processing research fields. Low-light image enhancement specifically addresses images captured in low-light conditions such as nighttime, where the common goal is to brighten the image, improve its contrast for better visual quality, and reveal details hidden in darkness. Research fields that could assist us in low-light environments, such as object detection, have glossed over this aspect despite breakthrough after breakthrough in recent years, most noticeably because of the lack of low-light data (less than 2% of the total images) in successful public benchmark datasets such as PASCAL VOC, ImageNet, and Microsoft COCO. To improve image quality, these low-light images need to be enhanced. For this purpose, an exclusively dark dataset comprising images captured in visible light only is proposed. Further, a dehazing technique is used for haze removal, histogram equalization (HE) is used for contrast enhancement, and a denoising technique is used for noise removal. Experimental results demonstrate that the proposed method achieves good performance in low-light image enhancement and outperforms state-of-the-art methods in terms of contrast enhancement and noise reduction.
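The histogram equalization (HE) step mentioned above has a standard 8-bit implementation, sketched here for a grayscale image:

```python
import numpy as np

def hist_equalize(gray):
    """Histogram equalization for an 8-bit grayscale image:
    build the CDF of pixel intensities and use it as a lookup
    table mapping the occupied range onto the full 0..255 span."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]  # first occupied intensity level
    scale = max(cdf[-1] - cdf_min, 1)
    lut = np.clip(np.round((cdf - cdf_min) / scale * 255), 0, 255)
    return lut.astype(np.uint8)[gray]
```

Applied per channel (or on a luminance channel), this spreads the compressed dark histogram of a low-light image across the full intensity range.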
