Restoration of architectural mural images under low-light conditions via Multi-level Interactive Siamese Filtering
- Research Article
- 10.1038/s40494-025-01635-9
- Mar 6, 2025
- npj Heritage Science
Ancient murals represent invaluable heritage, providing deep insights into historic culture. However, these murals are increasingly at risk from long-term degradation caused by oxidation, inadequate protection, and other factors, resulting in damage such as peeling and mold. Furthermore, low-light conditions during image capture complicate analysis and restoration, making it difficult to identify and repair defects effectively. To tackle these pressing challenges and facilitate efficient batch restoration at archaeological sites, we propose a two-stage restoration model named MER. First, our model employs an innovative illumination enhancement module to improve the lighting of low-light mural images. Second, an automatic defect detection strategy, combined with a multi-receptive-field approach, is used to systematically restore the identified defects. Comprehensive evaluations demonstrate that our MER model significantly enhances the visual quality of the restored images and achieves superior performance on relevant metrics compared to existing methods. Our work highlights the importance of addressing both lighting issues and defect detection in ancient mural restoration. Furthermore, we have launched a website dedicated to the restoration of ancient mural paintings using the proposed model. Code is available at https://gitee.com/bbfan2024/MER.git.
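The two-stage idea above (enhance illumination first, then repair detected defects) can be illustrated with a minimal sketch. The gamma curve and the neighbour-mean fill below are hypothetical stand-ins for MER's learned illumination module and multi-receptive-field inpainting network, not the paper's implementation.

```python
# Hypothetical sketch of a two-stage "enhance then repair" pipeline.
# Grayscale values in [0, 1]; nested lists stand in for image arrays.

def enhance(image, gamma=0.5):
    """Stage 1: simple gamma-based illumination enhancement (placeholder
    for a learned illumination module)."""
    return [[v ** gamma for v in row] for row in image]

def inpaint(image, mask):
    """Stage 2: fill defect pixels (mask == 1) with the mean of their
    valid 4-neighbours (placeholder for a learned restoration network)."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(h):
        for x in range(w):
            if mask[y][x]:
                vals = [image[ny][nx]
                        for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1))
                        if 0 <= ny < h and 0 <= nx < w and not mask[ny][nx]]
                if vals:
                    out[y][x] = sum(vals) / len(vals)
    return out

dark = [[0.04, 0.09], [0.16, 0.25]]
mask = [[0, 0], [0, 1]]           # bottom-right pixel flagged as a defect
restored = inpaint(enhance(dark), mask)
```

The defect pixel is filled from its already-enhanced neighbours, which is why enhancement must run first: filling from the dark input would bake low-light values into the repair.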
- Conference Article
- 10.1109/crv.2016.29
- Jun 1, 2016
Very low-light conditions are problematic for current robotic vision algorithms as captured images are subject to high levels of ISO noise. We propose a novel deep-structured stochastically fully-connected conditional random field (DSFCRF) model for image restoration in very low-light conditions. The DSFCRF model combines the improved performance of deep-structured graphical models with the reduced complexity of stochastically fully-connected random fields. The proposed model was compared to state-of-the-art image restoration methods using a set of images contaminated with synthetically generated noise and a set of natural images captured in very low-light conditions. Experimental results indicate the potential of DSFCRFs for low-light image restoration.
- Research Article
- 10.7498/aps.71.20220099
- Jan 1, 2022
- Acta Physica Sinica
When images are captured under low-light conditions, the results often suffer from low visibility. Such low-visibility images not only degrade the visual effect but also cause many difficulties in practical applications, so image enhancement under low-light conditions has long been a challenging problem in image algorithms. Most existing enhancement methods operate in the RGB color space and ignore the correlation among the three RGB primary colors, which makes color distortion likely when an image is enhanced. To address the poor visibility and color deviation of low-light images, this paper proposes an improved Retinex network enhancement method. First, the low-light RGB image is transformed into the HSV color space, a Retinex decomposition network is used to decompose and enhance the value component separately, and the resolution of the value component is increased through an up-sampling operation. Then, nearest-neighbor interpolation is used to increase the resolutions of the hue and saturation components, which are combined with the enhanced value component and converted back to the RGB color space to obtain an initial enhanced image. Finally, wavelet-transform image fusion is used to fuse this result with the original low-light image and eliminate over-enhanced regions in the initial enhanced image. Experimental results show that the proposed method has clear advantages in brightness enhancement and color restoration of low-light images. In particular, compared with the original Retinex network method, the NIQE value decreases by an average of 19.49%, and the image standard deviation increases by an average of 41.35%. The proposed algorithm is expected to be effective in fields such as security monitoring and biomedicine.
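The core of the HSV strategy above is that enhancing only the V (value) channel leaves hue and saturation, and hence colour, untouched. A per-pixel sketch using the standard library's colorsys, with a gamma curve standing in for the paper's Retinex decomposition network:

```python
import colorsys

def enhance_v_channel(rgb, gamma=0.4):
    """rgb: (r, g, b) in [0, 1]. Brighten the V channel only; H and S are
    preserved, so the pixel's colour character is unchanged."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    return colorsys.hsv_to_rgb(h, s, v ** gamma)

dim_red = (0.2, 0.05, 0.05)                 # a dark, reddish pixel
bright = enhance_v_channel(dim_red)

# Hue and saturation survive the round trip; only V is raised.
h0, s0, v0 = colorsys.rgb_to_hsv(*dim_red)
h1, s1, v1 = colorsys.rgb_to_hsv(*bright)
```

Enhancing in RGB instead (e.g. applying the gamma per channel) would change the ratios between R, G, and B and shift the hue, which is exactly the colour-distortion problem the abstract describes.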
- Conference Article
- 10.1117/12.172092
- Apr 4, 1994
Even in confocal scanning, longitudinal resolution is poorer than lateral resolution. It is therefore of interest to go "beyond confocal" and achieve still better optical sectioning by image restoration methods. In our previous work we applied two methods to simulated 3D microscope images: the constrained Jansson-van Cittert (JVC) method, which is a deterministic regularized image restoration algorithm, and the expectation-maximization (EM) algorithm, which is a method to obtain the maximum likelihood solution of the restoration problem under Poisson image statistics. In this paper we apply both the JVC algorithm and the EM algorithm to real image data obtained from our laser scanning confocal microscope. Slices of the original and restored images agree with our earlier numerical simulations. Specifically: (a) optical sectioning is improved by both algorithms; (b) the JVC restoration is noisier than the image restored with the EM algorithm, showing the advantage of the ML approach under low light conditions; (c) noise in the EM restoration shows that regularization is still needed.
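The EM algorithm for maximum-likelihood restoration under Poisson statistics is the well-known Richardson-Lucy iteration. A minimal 1-D sketch (the paper works in 3-D; the kernel and data here are illustrative), with no regularization, which is exactly why noise amplification appears after many iterations as the abstract notes:

```python
# 1-D EM / Richardson-Lucy update for Poisson deblurring:
#   x_{k+1}[i] = x_k[i] * sum_j h[j-i] * y[j] / (h*x_k)[j]
# with a normalised blur kernel h.

def convolve(x, h):
    """'Same'-size 1-D convolution with zero padding; h has odd length."""
    r = len(h) // 2
    return [sum(h[k + r] * x[i + k]
                for k in range(-r, r + 1) if 0 <= i + k < len(x))
            for i in range(len(x))]

def rl_step(x, y, h):
    """One EM (Richardson-Lucy) multiplicative update."""
    blurred = convolve(x, h)
    ratio = [yi / max(bi, 1e-12) for yi, bi in zip(y, blurred)]
    correction = convolve(ratio, h[::-1])   # correlate with the flipped kernel
    return [xi * ci for xi, ci in zip(x, correction)]

h = [0.25, 0.5, 0.25]                 # symmetric, normalised blur kernel
truth = [0.0, 0.0, 4.0, 0.0, 0.0]     # a single bright point
y = convolve(truth, h)                # noiseless blurred observation
x = [1.0] * 5                         # flat initial estimate
for _ in range(50):
    x = rl_step(x, y, h)              # iterates sharpen toward the point
```

For noiseless data the iterates recover the point source while conserving total flux; with Poisson noise in y, the same iteration starts fitting noise, motivating the regularization the authors call for.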
- Conference Article
- 10.1117/12.506715
- Nov 20, 2003
Time Delay and Integration (TDI) sensors scan the image in one dimension using a rectangular sensor array that integrates multiple time-delayed exposures of the same object. Due to physical constraints, the TDI sensor may have a staggered structure, in which the odd and even sensor elements are horizontally separated. TDI image acquisition systems are usually employed in low signal-to-noise situations such as low-light conditions or thermal imaging, or when high-speed readout is required. This work deals with the analysis and restoration of images acquired by staggered thermal TDI sensors in the presence of mechanical vibrations. Vibrations during such an image acquisition process cause space-variant image distortions in the scanning direction, including geometric warps (such as interlace comb effects) and blur. This situation differs from the common case, where image degradation caused by motion is modeled as space-invariant and can be treated by deconvolution techniques. The relative motion at each location in the degraded image is identified from the image using a differential technique. This information is then used to reconstruct the image using the projection onto convex sets (POCS) technique. A main novelty of this work is the application of such methods to scanned images (column-wise). Restorations are performed on simulated images and on real mechanically degraded thermal images.
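POCS restoration alternately projects an estimate onto each convex constraint set; the iterates converge to a point in the intersection. An illustrative 1-D sketch, where the two sets (a box constraint on pixel values and an affine mean constraint) are simple stand-ins for the geometric and intensity constraints a real POCS restoration would use:

```python
# Projection onto convex sets (POCS): alternate projections converge to a
# point satisfying all constraints simultaneously.

def project_box(x, lo=0.0, hi=1.0):
    """Project onto the box constraint: clip every value into [lo, hi]."""
    return [min(max(v, lo), hi) for v in x]

def project_mean(x, target):
    """Project onto the affine set {x : mean(x) == target} (uniform shift)."""
    shift = target - sum(x) / len(x)
    return [v + shift for v in x]

x = [-0.5, 0.2, 1.8]                  # infeasible initial estimate
for _ in range(100):
    x = project_mean(project_box(x), target=0.5)
```

For these two sets the iterates settle at [0.15, 0.35, 1.0]: inside the box, with the required mean of 0.5.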
- Conference Article
- 10.1109/dicta.2015.7371306
- Nov 1, 2015
This paper is concerned with automatically fusing multiple noisy and partially corrupted source images into a single denoised image. To create the fused image we minimise a convex objective function, which ensures spatial smoothness through total variation regularisation, and similarity to the source images via pixel-wise selective regularisation against each of the source images. We call this approach Selective Multi-Source Total Variation Image Restoration (SMTV). Applications of SMTV include noise removal in low-light conditions, enhancement of images from low quality or damaged imaging sensors and haze or cloud removal from satellite imagery. Experimental evaluation demonstrates that the fusion of multiple images results in a more accurate recovery than single image restoration.
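The convex objective described above can be sketched as a total-variation smoothness term plus per-pixel selectively weighted data-fidelity terms against each source. The weighting scheme and norms below are illustrative choices, not the paper's exact formulation:

```python
# Sketch of an SMTV-style objective:
#   E(u) = lam * TV(u) + sum_k sum_p w_k[p] * |u[p] - f_k[p]|
# where w_k down-weights pixels of source f_k flagged as corrupted.

def tv(u):
    """Anisotropic total variation of a 2-D image (list of lists)."""
    h, w = len(u), len(u[0])
    return (sum(abs(u[y][x+1] - u[y][x]) for y in range(h) for x in range(w-1))
            + sum(abs(u[y+1][x] - u[y][x]) for y in range(h-1) for x in range(w)))

def smtv_objective(u, sources, weights, lam=0.1):
    data = sum(wk[y][x] * abs(u[y][x] - fk[y][x])
               for fk, wk in zip(sources, weights)
               for y in range(len(u)) for x in range(len(u[0])))
    return lam * tv(u) + data

u  = [[0.5, 0.5], [0.5, 0.5]]      # candidate fused image
f1 = [[0.5, 0.5], [0.5, 0.9]]      # source with one corrupted pixel
w1 = [[1, 1], [1, 0]]              # selective weights mask that pixel out
w_full = [[1, 1], [1, 1]]

cost_sel  = smtv_objective(u, [f1], [w1])      # corrupted pixel excluded
cost_full = smtv_objective(u, [f1], [w_full])  # corrupted pixel included
```

Zeroing the weight at the corrupted pixel removes its penalty entirely, so the smooth candidate is cost-free under selective weighting but penalized under uniform weighting; minimizing this objective over u is what produces the fused image.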
- Research Article
- 10.1016/j.imavis.2024.105035
- Apr 23, 2024
- Image and Vision Computing
Localization-aware logit mimicking for object detection in adverse weather conditions
- Research Article
- 10.1609/aaai.v35i18.17944
- May 18, 2021
- Proceedings of the AAAI Conference on Artificial Intelligence
Deep learning based methods have achieved remarkable success in image restoration and enhancement, but a majority of such methods rely on RGB input images. These methods fail to take into account the rich spectral distribution of natural images. We propose a deep architecture, SpecNet, which computes a spectral profile to estimate pixel-wise dynamic range adjustment of a given image. First, we employ an unpaired cycle-consistent framework to generate hyperspectral images (HSI) from low-light input images. The HSI is then used to generate a normal-light image of the same scene. In order to infer a plausible HSI from an RGB image, we incorporate a self-supervision and a spectral profile regularization network. We evaluate the benefits of optimizing the spectral profile for real and fake images in low-light conditions on the LOL dataset.
- Research Article
- 10.1609/aaai.v39i8.32887
- Apr 11, 2025
- Proceedings of the AAAI Conference on Artificial Intelligence
Enhancing images captured under low-light conditions has been a topic of research for several years. Nonetheless, existing image restoration techniques mainly concentrate on reconstructing images from RGB data, often neglecting the possibility of utilizing additional modalities. With the progress in handheld technology, capturing thermal images with mobile devices has become straightforward, so investigating the integration of thermal data into image restoration presents a valuable research opportunity. Therefore, in this paper, we propose a multimodal low-light image enhancement task based on thermal information and establish a dataset named TLIE (Thermal-aware Low-light Image Enhancement), consisting of 1,113 samples. Each sample in our dataset includes a low-light image, a normal-light image, and the corresponding thermal map. Additionally, based on the TLIE dataset, we develop a multimodal approach that simultaneously processes input images and thermal map data to produce the predicted normal-light images. We compare our method with previous unimodal and multimodal state-of-the-art LIE methods, and the experimental results and detailed ablation studies demonstrate the effectiveness of our method.
- Research Article
- 10.1016/j.compag.2024.109169
- Jun 17, 2024
- Computers and Electronics in Agriculture
Low-light wheat image enhancement using an explicit inter-channel sparse transformer
- Research Article
- 10.1016/j.patcog.2023.109344
- Jan 15, 2023
- Pattern Recognition
A High Dynamic Range Imaging Method for Short Exposure Multiview Images
- Research Article
- 10.1088/1742-6596/1992/2/022106
- Aug 1, 2021
- Journal of Physics: Conference Series
Image restoration is an important research area in computer vision. Its purpose is to automatically recover lost content in mural images based on the known content, and it has wide application value in mural image editing, film and television special-effects production, virtual reality, and digital cultural heritage protection. In deep-learning-based image restoration, the design of the network and the choice of loss function during training are key concerns. Each method has its own advantages, disadvantages, and scope of application, and improving the semantic correctness, structure, and detail of restoration results has always been the direction of researchers' efforts. To this end, this paper surveys the main features, existing problems, training-sample requirements, main application fields, and reference code of various methods. Research on deep-learning-based mural image restoration has made significant progress. However, the application of deep learning to mural image restoration is still in its infancy, and most work uses only the content information of the mural image to be repaired itself. The restoration of mural images based on deep learning therefore remains a challenging subject, and how to design a universal restoration network that improves the accuracy of restoration results requires more in-depth research.
- Research Article
- 10.1109/tpami.2024.3432308
- Dec 1, 2024
- IEEE transactions on pattern analysis and machine intelligence
Low-light image enhancement (LLIE) investigates how to improve the brightness of an image captured in illumination-insufficient environments. The majority of existing methods enhance low-light images in a global and uniform manner, without taking into account the semantic information of different regions. Consequently, a network may easily deviate from the original color of local regions. To address this issue, we propose a semantic-aware knowledge-guided framework (SKF) that can assist a low-light enhancement model in learning rich and diverse priors encapsulated in a semantic segmentation model. We concentrate on incorporating semantic knowledge from three key aspects: a semantic-aware embedding module that adaptively integrates semantic priors in feature representation space, a semantic-guided color histogram loss that preserves color consistency of various instances, and a semantic-guided adversarial loss that produces more natural textures by semantic priors. Our SKF is appealing in acting as a general framework in the LLIE task. We further present a refined framework SKF++ with two new techniques: (a) Extra convolutional branch for intra-class illumination and color recovery through extracting local information and (b) Equalization-based histogram transformation for contrast enhancement and high dynamic range adjustment. Extensive experiments on various benchmarks of LLIE task and other image processing tasks show that models equipped with the SKF/SKF++ significantly outperform the baselines and our SKF/SKF++ generalizes to different models and scenes well. Besides, the potential benefits of our method in face detection and semantic segmentation in low-light conditions are discussed.
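The semantic-guided colour histogram loss above can be sketched as follows: for each semantic region, compare the intensity histogram of the enhanced image with that of the reference, and sum the per-region distances. The binning and L1 distance here are illustrative choices, not SKF's exact loss:

```python
# Sketch of a semantic-guided histogram loss: per-region histogram matching
# preserves colour consistency within each semantic instance.

def region_histogram(values, bins=4):
    """Normalised histogram of values in [0, 1]."""
    counts = [0] * bins
    for v in values:
        counts[min(int(v * bins), bins - 1)] += 1
    n = len(values) or 1
    return [c / n for c in counts]

def semantic_hist_loss(pred, ref, seg):
    """pred, ref: flat pixel lists; seg: same-length list of region labels.
    Sums the L1 histogram distance over each semantic region."""
    loss = 0.0
    for label in set(seg):
        p = [v for v, s in zip(pred, seg) if s == label]
        r = [v for v, s in zip(ref, seg) if s == label]
        loss += sum(abs(a - b) for a, b in zip(region_histogram(p),
                                               region_histogram(r)))
    return loss

pred = [0.1, 0.2, 0.8, 0.9]
ref  = [0.1, 0.2, 0.8, 0.9]
seg  = [0, 0, 1, 1]        # two semantic regions

same    = semantic_hist_loss(pred, ref, seg)               # matched regions
shifted = semantic_hist_loss([0.9, 0.9, 0.8, 0.9], ref, seg)  # region 0 drifted
```

A global histogram loss could miss this drift if other regions compensate; computing it per semantic region is what penalizes colour deviation in individual instances.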
- Conference Article
- 10.1109/cvprw56347.2022.00078
- Jun 1, 2022
Object detection in low-light conditions remains a challenging but important problem with many practical implications. Some recent works show that, in low-light conditions, object detectors using raw image data are more robust than detectors using image data processed by a traditional ISP pipeline. To improve detection performance in low-light conditions, one can fine-tune the detector to use raw image data or use a dedicated low-light neural pipeline trained with paired low- and normal-light data to restore and enhance the image. However, different camera sensors have different spectral sensitivity, and learning-based models using raw images process data in the sensor-specific color space. Thus, once trained, they do not guarantee generalization to other camera sensors. We propose to improve generalization to unseen camera sensors by implementing a minimal neural ISP pipeline for machine cognition, named GenISP, that explicitly incorporates Color Space Transformation to a device-independent color space. We also propose a two-stage color processing implemented by two image-to-parameter modules that take a down-sized image as input and regress global color correction parameters. Moreover, we propose to train GenISP under the guidance of a pre-trained object detector, avoiding assumptions about the perceptual quality of the image and instead optimizing the image representation for machine cognition. At the inference stage, GenISP can be paired with any object detector. We perform extensive experiments comparing our method to other low-light image restoration and enhancement methods in an extrinsic task-based evaluation and validate that GenISP can generalize to unseen sensors and object detectors. Finally, we contribute a low-light dataset of 7K raw images annotated with 46K bounding boxes for task-based benchmarking of future low-light image restoration and low-light object detection.
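The colour processing described above (a colour-space transformation to a device-independent space, followed by globally regressed correction parameters) can be sketched per pixel. The 3x3 matrix and gain values below are illustrative, not those regressed by GenISP's image-to-parameter modules:

```python
# Sketch of GenISP-style colour processing: a 3x3 colour-space
# transformation (CST) into a device-independent space, then global
# per-channel gains regressed from a down-sized image.

def apply_cst(pixel, M):
    """Multiply an RGB pixel by a 3x3 CST matrix."""
    return [sum(M[i][j] * pixel[j] for j in range(3)) for i in range(3)]

def global_gain(pixel, gains):
    """Apply globally regressed per-channel correction gains."""
    return [g * c for g, c in zip(gains, pixel)]

M = [[0.9, 0.1, 0.0],       # hypothetical sensor-to-device-independent CST
     [0.2, 0.7, 0.1],
     [0.0, 0.1, 0.9]]
gains = [1.2, 1.0, 1.4]     # hypothetical global correction parameters

raw = [0.5, 0.4, 0.1]       # a sensor-space pixel
out = global_gain(apply_cst(raw, M), gains)
```

Because the CST maps each sensor's colour space into a common one before the learned correction, the downstream modules never see sensor-specific colour statistics, which is the mechanism behind the claimed cross-sensor generalization.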
- Research Article
- 10.1186/s40494-022-00771-w
- Aug 31, 2022
- Heritage Science
Murals are an important component of the culture and art of Dunhuang in China. Unfortunately, these murals have been damaged, or are being damaged, by deterioration such as cracking, hollowing, flaking, mildew, and dirt. Existing image restoration algorithms suffer from incomplete repair and disharmonious texture when repairing large areas, so they perform poorly on diseased mural regions. Because of the lack of a standard mural dataset, a Dunhuang mural dataset is created in this paper. Meanwhile, we propose a network architecture, SeparaFill, which connects two generators based on U-Net. Based on the characteristics of the paintings, the contour-line pixel area of the mural image is innovatively separated from the content pixel area. First, a contour restoration generator network with skip connections and hierarchical residual blocks is employed to repair the contour lines. Then, the color mural image is repaired by a content completion network guided by the repaired contours. The content completion generator exploits full-resolution branches and U-shaped generator branches, and convolution layers with different kernel sizes are fused to improve the reusability of low-level features. Finally, global and local discriminator networks are applied to judge whether the repaired mural image is authentic in both the modified and unmodified areas. The proposed SeparaFill shows good performance in restoring the line structure of damaged mural images and retaining their contour information. Compared with existing restoration algorithms in real mural damage repair experiments, our algorithm increases the peak signal-to-noise ratio (PSNR) by an average of 1.1–4.3 dB and slightly improves the structural similarity (SSIM) values. Experimental results reveal the good performance of the proposed model, which can contribute to the digital restoration of ancient murals.
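The abstract reports gains measured in PSNR. For reference, PSNR between two images with values in [0, 1] is 10 log10(MAX^2 / MSE); a minimal implementation (the sample pixel lists are illustrative):

```python
import math

def psnr(a, b, peak=1.0):
    """Peak signal-to-noise ratio in dB between two flat pixel lists."""
    mse = sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
    return float('inf') if mse == 0 else 10 * math.log10(peak ** 2 / mse)

reference = [0.0, 0.5, 1.0, 0.25]
restored  = [0.0, 0.5, 1.0, 0.26]   # one pixel off by 0.01
score = psnr(reference, restored)
```

Because the scale is logarithmic, the reported 1.1–4.3 dB improvement corresponds to a roughly 1.3x to 2.7x reduction in mean squared error.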