Enhancement of low-light images using Sakaguchi-type function-based cost-effective filtering
- Research Article
36
- 10.1016/j.image.2021.116527
- Oct 12, 2021
- Signal Processing: Image Communication
Low-Light Homomorphic Filtering Network for integrating image enhancement and classification
- Research Article
7
- 10.1038/s41598-024-69505-1
- Aug 9, 2024
- Scientific Reports
In low-light environments, the amount of light captured by the camera sensor is reduced, lowering image brightness. Details in the image become difficult to recognize or are lost entirely, which hampers subsequent processing of low-light images. Low-light image enhancement methods can increase image brightness while better restoring color and detail information. A generative adversarial network is proposed to improve the quality of low-light images. This network consists of a generative network and an adversarial network. In the generative network, a multi-scale feature extraction module, which consists of dilated convolutions, regular convolutions, max pooling, and average pooling, is designed. This module extracts low-light image features at multiple scales, yielding richer feature information. Secondly, an illumination attention module is designed to reduce the interference of redundant features. This module assigns greater weight to important illumination features, enabling the network to extract illumination features more effectively. Finally, an encoder-decoder generative network is designed. It uses the multi-scale feature extraction module, the illumination attention module, and other conventional modules to enhance low-light images and improve their quality. For the adversarial network, a dual-discriminator structure is designed, comprising a global adversarial network and a local adversarial network. These determine whether the input image is real or generated from global and local features, respectively, improving the performance of the generator network. Additionally, an improved loss function is proposed by introducing color loss and perceptual loss into the conventional loss function. It better measures the color difference between the generated image and a normally illuminated image, reducing color distortion during enhancement.
The proposed method, along with other methods, is tested on both synthesized and real low-light images. Experimental results show that, for synthetic low-light images, the images enhanced by the proposed method are closer to normally illuminated images than those of other methods. For real low-light images, the enhanced images retain more detail, appear clearer, and achieve higher performance metrics. Overall, the proposed method demonstrates better enhancement capability than competing methods on both synthetic and real low-light images.
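The abstract does not give the exact form of its color loss. One common formulation in low-light enhancement penalizes the angle between per-pixel RGB vectors of the generated and reference images, which compares hue while ignoring overall brightness. A minimal numpy sketch under that assumption (the function name and details are illustrative, not taken from the paper):

```python
import numpy as np

def color_loss(generated: np.ndarray, reference: np.ndarray, eps: float = 1e-8) -> float:
    """Mean angular difference between per-pixel RGB vectors (arrays of shape (H, W, 3)).

    A small angle means the enhanced image preserves the reference colors,
    independent of any global brightness change.
    """
    g = generated.reshape(-1, 3).astype(np.float64)
    r = reference.reshape(-1, 3).astype(np.float64)
    cos = np.sum(g * r, axis=1) / (np.linalg.norm(g, axis=1) * np.linalg.norm(r, axis=1) + eps)
    return float(np.mean(np.arccos(np.clip(cos, -1.0, 1.0))))
```

Because the loss only measures direction, uniformly darkening or brightening an image leaves it near zero, while swapping color channels raises it.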
- Research Article
3
- 10.1016/j.cviu.2024.104063
- Jun 22, 2024
- Computer Vision and Image Understanding
Self-supervised network for low-light traffic image enhancement based on deep noise and artifacts removal
- Research Article
7
- 10.1016/j.jvcir.2023.103887
- Jun 30, 2023
- Journal of Visual Communication and Image Representation
Event-guided low light image enhancement via a dual branch GAN
- Research Article
- 10.30518/jav.1448219
- Jun 27, 2024
- Journal of Aviation
The aviation industry is in constant need of innovations in terms of safety and operational efficiency. In this context, low-light image enhancement technologies play an important role in numerous disciplines, from night flights to accident and collision investigations. Machine learning, deep learning, and traditional methods not only give the aviation industry an effective capacity for processing and improving images captured in low-light conditions, but also reveal important information by analysing low-light images of crashed and destroyed aircraft. Within the scope of this study, traditional methods, deep learning, and machine learning are combined to enhance and process low-light images of crashed and destroyed aircraft. By using the Swish and Tanh activation functions together in the deep learning model, the performance of the neural networks used to improve low-light images was increased, as was the resulting image quality. The enhanced images were evaluated and compared using PSNR and MSE as objective quality measures, under which the deep learning model achieved a PSNR of 29.85 and an MSE of 100.44. The results indicate that the deep learning model provides better image enhancement than traditional methods. In conclusion, low-light image enhancement and processing is an important technological advancement for the aviation industry, enabling safer and more efficient operations. The success of machine learning, including deep learning, alongside traditional methods shows that the aviation industry will achieve a safer and more innovative structure in the future.
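The MSE and PSNR metrics used above are standard, and the Swish activation has a simple closed form. A self-contained numpy sketch of all three (illustrative helpers, not the paper's code):

```python
import numpy as np

def mse(a: np.ndarray, b: np.ndarray) -> float:
    """Mean squared error between two same-shaped 8-bit images."""
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def psnr(a: np.ndarray, b: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    err = mse(a, b)
    return float("inf") if err == 0 else 10.0 * np.log10(max_val ** 2 / err)

def swish(x: np.ndarray) -> np.ndarray:
    """Swish activation: x * sigmoid(x)."""
    return x / (1.0 + np.exp(-x))
```

For example, two 8-bit images whose pixels differ uniformly by 10 give an MSE of 100 and a PSNR of about 28.1 dB.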
- Research Article
- 10.3389/fmars.2025.1578735
- Jul 28, 2025
- Frontiers in Marine Science
Obtaining high-quality images of Limulidae in amphibious environments is a challenging task due to insufficient light and the complex optical properties of water, such as light absorption and scattering, which often result in low contrast, color distortion, and blurring. These issues severely impact applications like nocturnal biological monitoring, underwater archaeology, and resource exploration. Traditional image enhancement methods struggle with the complex degradation of such images, but recent advancements in deep learning have shown promise. This paper proposes a novel method for amphibious low-light image enhancement based on hybrid Mamba, which integrates wavelet transform, Discrete Cosine Transform (DCT), and Fast Fourier Transform (FFT) within the Mamba framework. Wavelet transform effectively decomposes images at multiple scales, capturing feature information at different frequencies and excelling in noise removal and detail preservation, whereas DCT concentrates and compresses image energy, aiding in the restoration of high-frequency components and improving clarity. FFT provides efficient frequency domain analysis, accurately locating key information in the image spectrum and enhancing image quality. Mamba, as an emerging optimization strategy, offers unique computational characteristics and optimization capabilities, making it well suited for this task. The main contributions include the construction of the amphibious low-light image dataset (ALID) in collaboration with the Beibu Gulf Key Laboratory of Marine Biodiversity Conservation and the introduction of the hybrid Mamba method. 
Extensive experiments on the ALID dataset demonstrate that our method outperforms state-of-the-art approaches in both subjective visual assessment and quantitative analysis, achieving superior results in brightness enhancement and detail reconstruction, thus paving new paths for amphibious low-light image processing and promoting further development in related industries and research.
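Of the three transforms combined in the hybrid Mamba method above, the FFT step can be illustrated with a simple low/high-frequency decomposition: keep spectrum coefficients near the centre as the low-frequency image and treat the remainder as the high-frequency residual. A minimal numpy sketch (a generic ingredient, not the paper's architecture):

```python
import numpy as np

def fft_band_split(img: np.ndarray, radius: int):
    """Split a grayscale image into low- and high-frequency parts via the 2-D FFT.

    Frequencies within `radius` of the shifted-spectrum centre form the
    low-frequency image; the remainder is the high-frequency residual.
    By construction the two parts sum back to the input.
    """
    spec = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    mask = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= radius ** 2
    low = np.fft.ifft2(np.fft.ifftshift(spec * mask)).real
    high = img - low
    return low, high
```

With radius 0 only the DC term survives, so the low-frequency image is the constant mean; a radius covering the whole spectrum returns the input unchanged.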
- Research Article
3
- 10.18280/ts.390413
- Aug 31, 2022
- Traitement du Signal
Most computer vision applications demand input images that meet their specific requirements. To complete different vision tasks, e.g., object detection, object recognition, and object retrieval, low-light images must be enhanced by different methods to achieve different processing effects. The existing image enhancement methods, which are based on non-physical imaging models, and image generation methods, which are based on deep learning, are not ideal for low-light image processing. To solve this problem, this paper explores low-light image enhancement and target detection based on deep learning. Firstly, a simplified expression was constructed for the optical imaging model of low-light images, and a Haze-line method was proposed for color correction of low-light images, which can effectively enhance low-light images based on the global background light and medium transmission rate of the optical imaging model of such images. Next, the network framework adopted by the proposed low-light image enhancement model was introduced in detail: the framework includes two deep domain adaptation modules that realize domain transformation and image enhancement, respectively, and the loss functions of the model were presented. To detect targets in the enhanced output image, a joint enhancement and target detection method was proposed for low-light images. The effectiveness of the constructed model was demonstrated through experiments.
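The abstract does not spell out its simplified optical imaging model, but models built on a global background light A and a medium transmission rate t typically follow the classical form I = J·t + A·(1 − t), which can be inverted for enhancement. A minimal numpy sketch under that assumption (function names illustrative):

```python
import numpy as np

def degrade(J: np.ndarray, t, A) -> np.ndarray:
    """Forward optical imaging model: observed I = J * t + A * (1 - t),
    where J is the scene radiance, t the transmission, A the background light."""
    return J * t + A * (1.0 - t)

def restore(I: np.ndarray, t, A, t0: float = 0.1) -> np.ndarray:
    """Invert the model: J = (I - A) / max(t, t0) + A.
    Clamping t by t0 avoids amplifying noise where transmission is near zero."""
    return (I - A) / np.maximum(t, t0) + A
```

Wherever the true transmission exceeds the clamp t0, applying restore after degrade recovers the original radiance exactly.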
- Conference Article
6
- 10.1109/ibssc51096.2020.9332217
- Dec 4, 2020
Images captured under poor illumination or at night lack the detail of images captured under proper lighting conditions. When used in computer vision applications, such images can produce undesirable output, making them unsuitable for observation and analysis. To address this, the paper proposes visibility enhancement of low-light images through a weighted fusion of a robust retinex model and dark channel prior based enhancement. The proposed method is validated using entropy as the quality measure, and its performance is compared with other popular low-light image enhancement techniques. For rigorous validation, different weight combinations are explored in the proposed fusion-based enhancement method.
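Three building blocks of the approach above have standard definitions: the dark channel (per-pixel minimum over color channels and a local patch), a weighted fusion of two enhanced outputs, and the entropy score used for validation. A minimal numpy sketch of each (illustrative, not the paper's implementation):

```python
import numpy as np

def dark_channel(img: np.ndarray, patch: int = 3) -> np.ndarray:
    """Dark channel prior: min over RGB channels, then min over a local patch."""
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode="edge")
    out = np.empty_like(mins)
    for y in range(mins.shape[0]):
        for x in range(mins.shape[1]):
            out[y, x] = padded[y:y + patch, x:x + patch].min()
    return out

def weighted_fusion(a: np.ndarray, b: np.ndarray, w: float) -> np.ndarray:
    """Blend two enhanced images (e.g. retinex and DCP outputs) with weight w for a."""
    return w * a + (1.0 - w) * b

def entropy(gray_u8: np.ndarray) -> float:
    """Shannon entropy (bits) of an 8-bit grayscale histogram; higher suggests more detail."""
    hist = np.bincount(gray_u8.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))
```

Sweeping w over, say, 0.0 to 1.0 and keeping the fusion with the highest entropy is one plausible reading of the weight-exploration step.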
- Research Article
2
- 10.1038/s41598-025-95366-3
- Mar 24, 2025
- Scientific Reports
The raw images captured by underground vision sensors in underground mine settings are disturbed by dim lighting, high dust levels, and complex electromagnetic conditions, suffering from high noise, low illumination, and low-resolution contamination, which further affects the supervision of the vision sensors. However, existing image enhancement methods relying on synthesized datasets are not suitable for improving images in real underground mine settings. This study focuses on addressing these challenges. We collected a large number of underground mine images and proposed a novel image enhancement approach. Inspired by visual image processing techniques, this approach combines low-light enhancement and dehazing methods to address the issues of uneven lighting and fog distortion. Specifically, the proposed Zero-Reference Depth Curve Estimation-Dehazing Network (Z-DCE-DNet) aims to enhance underground mine images. It addresses two key aspects: (1) enhancing low-light images by incorporating higher-order loss curves into the DCE-Net backbone and introducing a new loss function to optimize network learning for improved low-light image quality; (2) addressing the color distortion and blur caused by low light enhancement through post-processing using convolutional neural networks, with AOD-Net enhancing the clarity of downhole images. Extensive experimental results demonstrate that the Z-DCE-DNet method produces visually superior enhanced images, and comparative analyses of multiple object detectors reveal that the enhanced images lead to improved detection outcomes.
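The DCE-Net backbone that Z-DCE-DNet builds on estimates per-pixel quadratic light-enhancement curves of the form LE(x) = x + a·x·(1 − x), applied iteratively; the abstract's "higher-order loss curves" extend this idea. A minimal numpy sketch of the curve application alone, with scalar coefficients for clarity (the real method predicts per-pixel coefficient maps with a CNN):

```python
import numpy as np

def apply_light_enhancement_curve(x: np.ndarray, alphas) -> np.ndarray:
    """Iteratively apply the quadratic curve LE(x) = x + a * x * (1 - x).

    For x in [0, 1] and each a in [-1, 1] the output stays in [0, 1];
    positive a brightens mid-dark pixels the most while fixing 0 and 1.
    """
    for a in alphas:
        x = x + a * x * (1.0 - x)
    return x
```

Each iteration is a gentle gamma-like adjustment; stacking several gives the higher-order curve without ever clipping values outside [0, 1].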
- Research Article
- 10.1117/1.jei.32.1.013034
- Feb 16, 2023
- Journal of Electronic Imaging
Natural images captured by digital camera sensors in low-light conditions suffer from poor imaging quality. Existing low-light image enhancement (LLE) often yields unnatural results due to over-enhancement, artifacts, severe noise, etc. Prior studies either perform visual dual-pathway mechanisms or Retinex prior optimization for image enhancement. However, image enhancement based on the former generates artifacts because it directly stretches contrast in the structural layer with mixed high- and low-frequency information. The latter results in over-enhancement due to adding empirical prior items to the objective function. Thus, a unified three-pathway framework is proposed to address the aforementioned deficiencies for LLE. Specifically, the proposed framework is composed of a detail pathway, a reflection pathway, and an illuminance pathway. First, three information processing pathways can be obtained through different image decomposition strategies. Second, an indirect noise suppression strategy is developed in the computational flow of the detail pathway and reflection pathway to address the noise amplification problem of image enhancement. Third, the naturalness preservation enhancement task is conducted in the reflection pathway and illuminance pathway. Finally, the outputs of the different pathways are weighted and fused to enhance the low-light image. Moreover, qualitative and quantitative experimental results on two test datasets show that the proposed framework outperforms state-of-the-art methods.
- Research Article
4
- 10.3390/e26100882
- Oct 21, 2024
- Entropy (Basel, Switzerland)
In extremely dark conditions, low-light imaging may offer spectators a rich visual experience, which is important for both military and civic applications. However, images taken in ultra-micro light environments usually have inherent defects such as extremely low brightness and contrast, a high noise level, and serious loss of scene details and colors, which poses great challenges for research on low-light image enhancement and on object detection and classification. The low-light night vision images used as the study object in this work have excessively dim overall pictures with very little discernible feature information. Three algorithms, HE, AHE, and CLAHE, were used to enhance and highlight the images. The effectiveness of these enhancement methods was evaluated using metrics such as the peak signal-to-noise ratio and mean square error, and CLAHE was selected after comparison. The target images include vehicles, people, license plates, and other objects. The gray-level co-occurrence matrix (GLCM) was used to extract texture features from the enhanced images, and the extracted texture features were used as input to construct a backpropagation (BP) neural network classification model. Then, low-light image classification models were developed based on the VGG16 and ResNet50 convolutional neural networks combined with low-light image enhancement algorithms. The experimental results show that the overall classification accuracy of the VGG16 convolutional neural network model is 92.1%. Compared with the BP and ResNet50 neural network models, the classification accuracy was increased by 4.5% and 2.3%, respectively, demonstrating its effectiveness in classifying low-light night vision targets.
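The HE baseline compared above has a compact definition: remap each intensity through the normalized cumulative histogram so values spread over the full 0-255 range (AHE and CLAHE apply the same idea per local tile, with CLAHE clipping the histogram first). A minimal numpy sketch of global HE only:

```python
import numpy as np

def histogram_equalization(gray_u8: np.ndarray) -> np.ndarray:
    """Global histogram equalization (HE) for an 8-bit grayscale image.

    Builds the cumulative histogram, rescales it to 0-255, and uses it as a
    lookup table, stretching the image to cover the full intensity range.
    """
    hist = np.bincount(gray_u8.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf_min = cdf[cdf > 0][0]
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min + 1e-12) * 255.0)
    return lut.clip(0, 255).astype(np.uint8)[gray_u8]
```

A dark image whose pixels occupy only a narrow band, e.g. values 10-30, is stretched so its darkest value maps to 0 and its brightest to 255, while the ordering of intensities is preserved.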
- Research Article
71
- 10.3390/s23031347
- Jan 25, 2023
- Sensors (Basel, Switzerland)
Convolutional neural network (CNN)-based autonomous driving object detection algorithms have excellent detection results on conventional datasets, but detector performance can be severely degraded in low-light foggy weather environments. Existing methods have difficulty achieving a balance between low-light image enhancement and object detection. To alleviate this problem, this paper proposes a foggy traffic environment object detection framework, IDOD-YOLOV7, based on joint optimal learning of the image defogging module IDOD (AOD + SAIP) and the YOLOV7 detection module. Specifically, for low-light foggy images, we propose to improve image quality by joint optimization of image defogging (AOD) and image enhancement (SAIP), where the parameters of the SAIP module are predicted by a miniature CNN and the AOD module performs defogging by optimizing the atmospheric scattering model. The experimental results show that the IDOD module not only improves the defogging quality of low-light fog images but also achieves better results on objective evaluation indexes such as PSNR and SSIM. IDOD and YOLOV7 learn jointly in an end-to-end manner, so that object detection can be performed while image enhancement is executed in a weakly supervised manner. Finally, a low-light fogged traffic image dataset (FTOD) was built by physical fogging in order to solve the domain transfer problem; training the IDOD-YOLOV7 network on this real dataset improves the robustness of the model. We performed various experiments to visually and quantitatively compare our method with several state-of-the-art methods and demonstrate its superiority. The IDOD-YOLOV7 algorithm not only suppresses the artifacts of low-light fog images and improves their visual quality but also improves the perception of autonomous driving in low-light foggy environments.
- Research Article
15
- 10.1109/tip.2024.3486610
- Jan 1, 2024
- IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
Low-light image enhancement aims to improve the visual quality of images captured under poor illumination. However, enhancing low-light images often introduces image artifacts, color bias, and low SNR. In this work, we propose AnlightenDiff, an anchoring diffusion model for low-light image enhancement. Diffusion models can enhance a low-light image toward a well-exposed one by iterative refinement, but require anchoring to ensure that enhanced results remain faithful to the input. We propose a Dynamical Regulated Diffusion Anchoring mechanism and Sampler to anchor the enhancement process. We also propose a Diffusion Feature Perceptual Loss tailored for diffusion-based models to utilize different loss functions in the image domain. AnlightenDiff demonstrates the effectiveness of diffusion models for low-light enhancement and achieves high perceptual quality results. Our techniques show a promising future direction for applying diffusion models to image enhancement.
- Research Article
4
- 10.18280/ts.390313
- Jun 30, 2022
- Traitement du Signal
A vast amount of research has recently been done on dehazing single images, with more work on daytime images than night-time images; enhancement of low-light images is another area of active research. In this paper, a simple yet effective unified variational model is proposed for dehazing day and night images and for low-light enhancement, based on non-local global variational regularization. Given the relation between image dehazing and retinex, the haze removal process can be cast as minimizing a variational retinex model. Estimating the ambient light and transmission maps is a key step in modern dehazing methods. Atmospheric light is neither uniform nor constant in hazy night images, as night scenes often contain multiple light sources; lit and non-illuminated regions often have different colour characteristics, causing colour distortion and halo artifacts. To overcome this, our work directly implements a non-local retinex model based on the L2 norm, which simulates the average activity of inhibitory and excitatory neuronal populations in the cortex. Using a filtered-gradient approach, this biologically motivated L2 model is split into two parts: a sparse prior on the reflectance and a fidelity term tying the reflectance gradient to the observed image gradient. This unified framework of NLTV-Retinex and DCP efficiently performs low-light enhancement and dehazing of day and night images. We show results obtained with our method on daytime and night-time images and on a low-light image dataset, and compare quantitatively and qualitatively with recently reported methods, which demonstrates the effectiveness of our method.
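The variational model above is more involved than the classical retinex decomposition it builds on, but the core idea, reflectance = log(image) − log(estimated illumination), can be shown compactly. A minimal numpy sketch using a box blur as a crude illumination estimate (real methods use Gaussian or variational smoothing; names here are illustrative):

```python
import numpy as np

def box_blur(img: np.ndarray, k: int) -> np.ndarray:
    """k x k mean filter with edge padding, used as a crude illumination estimate."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def single_scale_retinex(img: np.ndarray, k: int = 5, eps: float = 1e-6) -> np.ndarray:
    """Reflectance estimate: log(image) - log(illumination), computed per pixel."""
    return np.log(img + eps) - np.log(box_blur(img, k) + eps)
```

On a uniformly lit region the illumination estimate equals the image, so the reflectance output is zero; deviations from the local mean, i.e. structure and detail, survive regardless of the absolute light level.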
- Research Article
66
- 10.1016/j.image.2022.116848
- Aug 28, 2022
- Signal Processing: Image Communication
Comparing deep learning models for low-light natural scene image enhancement and their impact on object detection and classification: Overview, empirical evaluation, and challenges