Heterogeneous synthetic aperture radar image domain generalized target identification method considering feature separation and diffusion
- Research Article
70
- 10.1109/tgrs.2019.2930322
- Dec 1, 2019
- IEEE Transactions on Geoscience and Remote Sensing
Change detection in heterogeneous remote sensing images is an important but challenging task because of the incommensurable appearances of the heterogeneous images. To solve the change detection problem in optical and synthetic aperture radar (SAR) images, this paper proposes an improved method that combines cooperative multitemporal segmentation and hierarchical compound classification (CMS-HCC), building on our previous work. Considering the large radiometric and geometric differences between heterogeneous images, a cooperative multitemporal segmentation method is first introduced to generate multi-scale segmentation results. This method segments the two images together by associating information from both, which reduces the noise and errors caused by area transitions and object misalignment and describes the boundaries of detected objects more accurately. Then, a region-based multitemporal hierarchical Markov random field (RMH-MRF) model is defined to combine spatial, temporal, and multi-level information. With the RMH-MRF model, hierarchical compound classification is performed by identifying the optimal configuration of labels with region-based marginal posterior mode estimation, further improving change detection accuracy. Changes are declared wherever the labels assigned to a pair of parcels differ, yielding multi-scale change maps. Experimental validation is conducted on several pairs of optical and SAR images and consists of two parts: a comparison of different multitemporal segmentation methods and a comparison of different change detection methods. The results show that the proposed method can effectively detect changes in heterogeneous images, with a low false-positive rate and high accuracy.
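The final step of this abstract's pipeline is a parcel-wise label comparison: a change is declared wherever the two classifications disagree. A minimal sketch of just that comparison step (the function name and toy label maps are mine, not the paper's):

```python
import numpy as np

# Illustrative sketch (not the paper's implementation): after hierarchical
# compound classification assigns a class label to each co-segmented parcel
# in both images, a change map marks parcels whose labels differ.
def change_map_from_labels(labels_t1, labels_t2):
    """Return a binary change map: 1 where parcel labels differ."""
    labels_t1 = np.asarray(labels_t1)
    labels_t2 = np.asarray(labels_t2)
    return (labels_t1 != labels_t2).astype(np.uint8)

labels_t1 = np.array([[0, 0, 1], [2, 2, 1]])  # toy classification at time 1
labels_t2 = np.array([[0, 1, 1], [2, 0, 1]])  # toy classification at time 2
print(change_map_from_labels(labels_t1, labels_t2))
```

Running the comparison at each segmentation scale would yield the multi-scale change maps the abstract describes.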
- Research Article
12
- 10.1016/j.ijleo.2020.165876
- Dec 23, 2020
- Optik
Synthetic aperture radar image despeckling with a residual learning of convolutional neural network
- Research Article
143
- 10.1109/tnnls.2021.3056238
- Feb 18, 2021
- IEEE Transactions on Neural Networks and Learning Systems
Change detection based on heterogeneous images, such as optical images and synthetic aperture radar images, is a challenging problem because of their huge appearance differences. To combat this problem, we propose an unsupervised change detection method that contains only a convolutional autoencoder (CAE) for feature extraction and a commonality autoencoder for exploring commonalities. The CAE can eliminate a large part of the redundancy in two heterogeneous images and obtain more consistent feature representations. The proposed commonality autoencoder can discover common features of ground objects between two heterogeneous images by transforming one heterogeneous image representation into the other. Unchanged regions with the same ground objects share many more common features than changed regions; therefore, the number of common features can indicate changed and unchanged regions, from which a difference map can be calculated. Finally, the change detection result is generated by applying a segmentation algorithm to the difference map. In our method, the network parameters of the commonality autoencoder are learned from the relevance of unchanged regions instead of from labels. Our experimental results on five real data sets demonstrate the promising performance of the proposed framework compared with several existing approaches.
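The abstract's key idea is that the per-pixel count of common features indicates change. A hedged sketch of that counting step only (the function name, the per-channel threshold `tau`, and the normalization are my assumptions; the paper's commonality autoencoder itself is not reproduced here):

```python
import numpy as np

# Sketch: after the commonality autoencoder maps one image's features toward
# the other's, pixels sharing many common features are likely unchanged, so a
# difference map can be built from the per-pixel count of common features.
def difference_map(feat_a, feat_b, tau=0.1):
    """feat_a, feat_b: (H, W, C) feature maps; returns an (H, W) difference map."""
    common = np.abs(feat_a - feat_b) < tau      # per-channel commonality test
    n_common = common.sum(axis=-1)              # count of common features
    return 1.0 - n_common / feat_a.shape[-1]    # few common features -> high change
```

Applying a segmentation algorithm (e.g., a simple threshold) to this map would then give the binary change detection result the abstract mentions.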
- Research Article
7
- 10.1109/tgrs.2023.3267480
- Jan 1, 2023
- IEEE Transactions on Geoscience and Remote Sensing
Deep-learning-based target recognition in synthetic aperture radar (SAR) images has been actively studied in recent years. However, it is very costly to collect large numbers of labeled SAR images, especially measured SAR target images of various classes, to train high-performance classification networks. To solve the problem of insufficient SAR data, electromagnetic computational tools have often been developed and used to synthesize the measured SAR target images from data modeling. However, despite the use of sophisticated SAR image modeling, there is a large domain gap between synthetic SAR images and measured images such that networks trained with synthetic SAR images tend to show poor classification performance when tested on measured SAR target images. In this paper, we propose a novel transformer-based synthetic-to-measured SAR target image translation network, referred to as SAR-SMT Net, to bridge the gap between synthetic and measured SAR target images. SAR-SMT Net takes synthetic SAR target images as input and estimates the latent representational features of their corresponding measured SAR images to faithfully adjust the global context and scattering characteristics of the input synthetic SAR target images to the corresponding measured SAR values. In addition, we propose five challenging experimental scenarios that can validate SAR image translation performance outcomes. Experimentally, SAR-SMT Net as proposed here outperforms previous state-of-the-art methods in the experiment scenarios, demonstrating feasible generalization ability when used to translate synthetic SAR target images into their corresponding measured SAR target images with a high level of fidelity, even for unseen target classes at unseen azimuth angles.
- Research Article
20
- 10.3390/rs13234918
- Dec 3, 2021
- Remote Sensing
To address the susceptibility to image noise, the subjectivity of training sample selection, and the inefficiency of state-of-the-art change detection methods for heterogeneous images, this study proposes a post-classification change detection method for heterogeneous images with improved training of a hierarchical extreme learning machine (HELM). After smoothing the images to suppress noise, a sample selection method is defined to train the HELM for each image, in which feature extraction is implemented separately for each heterogeneous image and the parameters need not be fine-tuned. The multi-temporal feature maps extracted from the trained HELM are then segmented to obtain classification maps, which are compared to generate a change map with change types. The proposed method is validated experimentally using one set of synthetic aperture radar (SAR) images obtained from Sentinel-1, one set of optical images acquired from Google Earth, and two sets of heterogeneous SAR and optical images. The results show that, compared to state-of-the-art change detection methods, the proposed method improves change detection accuracy by more than 8% in terms of the kappa coefficient and greatly reduces run time regardless of the type of images used. Such enhancement reflects the robustness and superiority of the proposed method.
- Research Article
10
- 10.1016/j.patcog.2023.110237
- Dec 30, 2023
- Pattern Recognition
Unsupervised spatial self-similarity difference-based change detection method for multi-source heterogeneous images
- Research Article
1
- 10.5755/j01.eie.26.6.25849
- Dec 18, 2020
- Elektronika ir Elektrotechnika
In this paper, a hybrid classification approach combining a deeper mask region-based convolutional neural network (Mask R-CNN) with a sparsity-driven despeckling algorithm is proposed for synthetic aperture radar (SAR) image segmentation, in place of classical segmentation methods. In satellite technology, SAR images are widely used in many areas, such as assessing atmospheric conditions, delineating agricultural fields, monitoring climatic change, and military target detection. SAR images must be segmented into meaningful regions for a high-quality segmentation process; however, they contain heavy speckle noise, which must also be reduced for quality segmentation. Current studies show that deep learning techniques are widely used for segmentation and can deliver high accuracy and fast results. A mask region-based convolutional neural network can not only separate each meaningful region in the image but also generate high-accuracy predictions for each meaningful region of a SAR image. The study shows that smoothed SAR images can be classified into multiple regions with deep neural networks.
- Research Article
5
- 10.3390/rs14112527
- May 25, 2022
- Remote Sensing
Heterogeneous synthetic aperture radar (SAR) images contain more complementary information than homologous SAR images; thus, the comprehensive utilization of heterogeneous SAR images could potentially improve performance for the monitoring of sea surface objects, such as sea ice and enteromorpha. Image registration is key to the application of monitoring sea surface objects. Heterogeneous SAR images exhibit both intensity and resolution differences; once resolution is unified, intensity differences become one of the most important factors affecting registration accuracy. In addition, sea surface objects have numerous repetitive and confusing features, which also limits registration accuracy. In this paper, we propose an improved L2Net network for image registration under intensity differences and repetitive texture features, using sea ice as the research object. The deep learning network can capture feature correlations between image patch pairs and can obtain correct matches from a large number of features with repetitive texture. For each SAR image pair, four patches of different sizes centered on corner points are used as inputs. Local features and more global features are thus fused to obtain strong structural features, distinguish between different repetitive textures, add contextual information, further improve feature correlation, and improve registration accuracy. An outlier removal strategy is proposed to remove false matches due to repetitive textures. Finally, the effectiveness of our method was verified by comparative experiments.
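Descriptor matching with outlier rejection, as in the abstract above, can be illustrated with a nearest-neighbour search; here a Lowe-style ratio test stands in for the paper's (unspecified) outlier-removal strategy, and the descriptors are plain vectors rather than L2Net outputs:

```python
import numpy as np

# Hand-rolled sketch (names mine): descriptors extracted around corner points
# in the two images are matched by nearest neighbour in Euclidean distance,
# and ambiguous matches (common with repetitive textures) are rejected when
# the best distance is not clearly smaller than the second best.
def match_descriptors(desc_a, desc_b, ratio=0.8):
    """desc_a, desc_b: (N, D) arrays; desc_b needs at least two rows."""
    matches = []
    for i, d in enumerate(desc_a):
        dist = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dist)
        if dist[order[0]] < ratio * dist[order[1]]:  # reject ambiguous matches
            matches.append((i, int(order[0])))
    return matches
```

For repetitive sea-ice textures, the ratio test discards exactly the matches where two candidate patches look nearly identical, which is the failure mode the paper targets.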
- Conference Article
- 10.1109/nnsp.1993.471862
- Sep 6, 1993
Two neural networks are combined to detect wakes in synthetic aperture radar (SAR) images of the ocean. The first network detects local wake features in small sub-regions of the image, and the second network integrates the information from the first to determine the presence or absence of a wake in the entire image. The networks are trained directly by gradient descent on either real SAR images or synthetic images and are designed to detect wakes in images with low signal-to-noise ratios. When trained on real images, the network detector recognizes the wake under any translation and is robust to rotations. With synthetic images, the network model is able to recognize wakes under all possible translations and rotations and over a wide range of opening angles.
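The two-stage structure described above (local scoring, then global aggregation) can be sketched without the neural networks themselves; here a patch mean stands in for the local feature network and a max stands in for the integrating network, both of which are my placeholder assumptions:

```python
import numpy as np

# Toy sketch of the two-stage idea (all parameters invented): a "local"
# stage scores sub-regions of the image, and a "global" stage aggregates
# the local scores into a single wake / no-wake decision.
def local_scores(image, win=4, step=4):
    H, W = image.shape
    scores = []
    for i in range(0, H - win + 1, step):
        for j in range(0, W - win + 1, step):
            patch = image[i:i + win, j:j + win]
            scores.append(patch.mean())          # stand-in for the local network
    return np.array(scores)

def detect_wake(image, threshold=0.5):
    # Stand-in for the integrating network: any strongly scored sub-region
    # is enough to declare a wake present.
    return float(local_scores(image).max()) > threshold
```

Because the decision depends only on whether some sub-region scores highly, the scheme is translation-tolerant by construction, mirroring the translation robustness the abstract reports.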
- Conference Article
7
- 10.1109/igarss46834.2022.9883323
- Jul 17, 2022
Heterogeneous image change detection, in contrast to homogeneous image change detection, has become a research hotspot because of the complementary information provided by different imaging mechanisms. However, the imaging differences make change detection by direct image comparison challenging. To address the incomparability of heterogeneous images and improve the efficiency of heterogeneous image change detection, this paper proposes a novel change detection method based on two-stage joint feature learning. Assuming that changes are sparse and that the image differences in unchanged areas between heterogeneous images stem from imaging and environmental differences, the method maps heterogeneous images into a similar feature space for comparison. First, bi-temporal feature maps with high similarity are extracted through joint feature learning on the heterogeneous images. These feature maps then undergo a second round of joint feature learning, optimized by a similarity measure, to map them into an approximately common feature space for comparison. Finally, the change map is obtained by segmenting the difference between the optimized feature maps. Experiments on two heterogeneous image datasets (optical and synthetic aperture radar (SAR) images) demonstrate its superiority over existing methods.
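The abstract's last step, segmenting the feature difference into a change map, is left unspecified; a simple Otsu threshold is a common stand-in for such a segmentation and can be sketched as follows (the histogram bin count and function names are my choices):

```python
import numpy as np

# Otsu's method: pick the threshold that maximizes between-class variance
# of the difference-map histogram, then binarize. This is an illustrative
# substitute for the paper's unspecified segmentation step.
def otsu_threshold(diff, nbins=64):
    hist, edges = np.histogram(diff.ravel(), bins=nbins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = centers[0], -1.0
    for k in range(1, nbins):
        w0, w1 = p[:k].sum(), p[k:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (p[:k] * centers[:k]).sum() / w0
        m1 = (p[k:] * centers[k:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2           # between-class variance
        if var > best_var:
            best_var, best_t = var, centers[k]
    return best_t

def change_map(diff):
    return (diff > otsu_threshold(diff)).astype(np.uint8)
```

On a well-separated difference map (small values in unchanged areas, large in changed ones), the threshold lands between the two modes and the binarization recovers the changed region.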
- Conference Article
5
- 10.1145/3503161.3548013
- Oct 10, 2022
Nowadays, real data in the person re-identification (ReID) task faces privacy issues, e.g., the banned dataset DukeMTMC-ReID, so it has become much harder to collect real data for ReID. Meanwhile, the labor cost of labeling ReID data remains very high and further hinders ReID research. Therefore, many methods generate synthetic images as alternatives to real images for ReID algorithms. However, there is an inevitable domain gap between synthetic and real images. In previous methods, the generation process is based on virtual scenes, and the synthetic training data cannot be adapted automatically to different target real scenes. To handle this problem, we propose a novel Target-Aware Generation pipeline to produce synthetic person images, called TAGPerson. Specifically, it involves a parameterized rendering method whose parameters are controllable and can be adjusted according to target scenes. In TAGPerson, we extract information from target scenes and use it to control our parameterized rendering process to generate target-aware synthetic images, which hold a smaller gap to the real images in the target domain. In our experiments, our target-aware synthetic images achieve much higher performance than generalized synthetic images on MSMT17, i.e., 47.5% vs. 40.9% rank-1 accuracy. We will release this toolkit (code is available at https://github.com/tagperson/tagperson-blender) for the ReID community to generate synthetic images to any desired taste.
- Research Article
2
- 10.1080/10106049.2024.2329673
- Jan 1, 2024
- Geocarto International
The task of change detection (CD) in optical and SAR images is an ever-evolving and demanding subject within the realm of remote sensing (RS). It is of great significance to identify target areas by using the complementary information between the two. Because of the distinct imaging mechanisms of optical and SAR sensors, effectively and accurately identifying changed regions is challenging. To this end, a novel heterogeneous RS image CD network (Twin-Depthwise Separable Convolution Connect Network, TDSCCNet) is proposed in this paper. Image domain transformation is the front-end task, while the back-end employs a single-branch, bilayer depthwise separable convolution-connected encoder-decoder to accomplish CD. Specifically, first, a cycle-consistent adversarial network (CycleGAN) serves to integrate the optical and SAR visual domains, yielding a consistent feature expression. Second, the single-branch encoder with bilayer depthwise separable convolutions performs change feature extraction. Finally, a multiscale-connected decoder reconstructs the change map and resolves local discontinuities and holes in the binary change map. A multiscale loss is designed to optimize global and local effects and alleviate the class-imbalance problem. The network was tested on four representative datasets, Gloucester, Shuguang Village, Italy, and WV-3, achieving overall accuracies of 97.36%, 97.01%, 97.62%, and 98.01%, respectively. Comparisons with existing methods confirmed the effectiveness of the proposed approach.
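The class-imbalance handling mentioned above is commonly implemented with a class-weighted loss; a hedged, minimal example of a weighted binary cross-entropy (the weighting scheme and function name are mine, and the paper's multiscale loss would additionally sum such a term over several decoder scales):

```python
import numpy as np

# Weighted binary cross-entropy: when changed pixels are rare, the positive
# class is up-weighted so the loss does not collapse to "predict unchanged".
def weighted_bce(pred, target, eps=1e-7):
    pred = np.clip(pred, eps, 1 - eps)
    pos_frac = target.mean()                     # fraction of changed pixels
    w_pos = 1.0 - pos_frac                       # up-weight the rare class
    w_neg = pos_frac
    loss = -(w_pos * target * np.log(pred)
             + w_neg * (1 - target) * np.log(1 - pred))
    return loss.mean()
```

With, say, 10% changed pixels, a prediction that tracks the targets scores a much lower loss than one that inverts them, while a constant "unchanged" prediction no longer dominates.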
- Research Article
7
- 10.3390/rs15020330
- Jan 5, 2023
- Remote Sensing
As SAR is an active microwave coherent imaging technology, SAR images suffer from severe speckle noise and low resolution due to the limitations of the imaging system, which hampers image interpretation and target detection. Existing SAR super-resolution (SR) methods usually reconstruct images with a fixed degradation model and rarely consider multiplicative speckle noise; moreover, most SR models are trained on synthetic datasets in which the low-resolution (LR) images are down-sampled from their high-resolution (HR) counterparts. These constraints cause a serious domain gap between synthetic and real SAR images. To solve these problems, this paper proposes an unsupervised blind SR method for SAR images that introduces SAR priors in a cycle-GAN framework. First, a learnable probabilistic degradation model combined with SAR noise priors is presented to accommodate SAR images produced by different platforms. Then, the degradation model and an SR model are trained simultaneously in a unified cycle-GAN framework to learn the intrinsic relationship between the HR and LR domains. The model is trained with real LR and HR SAR images instead of synthetic paired images to overcome the domain gap. Finally, experimental results on both synthetic and real SAR images demonstrate the high performance of the proposed method in terms of image quality and visual perception. Additionally, the proposed SR method shows strong potential for target detection tasks by significantly reducing missed detections and false alarms.
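The multiplicative speckle the abstract emphasizes is conventionally modeled as unit-mean gamma noise; a sketch of that standard L-look model (this is the textbook prior, not the paper's learned degradation network):

```python
import numpy as np

# Standard L-look speckle model: an observed SAR intensity is the clean
# intensity multiplied by gamma-distributed speckle with mean 1 and
# variance 1/L, so averaging more looks reduces the noise.
rng = np.random.default_rng(1)

def add_speckle(clean, looks=4):
    """Multiply by gamma speckle with shape=looks and unit mean."""
    speckle = rng.gamma(shape=looks, scale=1.0 / looks, size=clean.shape)
    return clean * speckle
```

A degradation model that samples such speckle (rather than additive Gaussian noise) is what makes the synthetic LR images statistically closer to real SAR data.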
- Conference Article
- 10.1117/12.548775
- Sep 2, 2004
This paper presents an algorithm for the automatic georegistration of electro-optical (EO) and synthetic aperture radar (SAR) imagery intelligence (IMINT). The algorithm uses a scene reference model in a global coordinate frame to register the incoming IMINT, or mission image. Auxiliary data from the mission image and this model predict a synthetic reference image of a scene at the same collection geometry as the mission image. This synthetic image provides a traceback structure relating the synthetic reference image to the scene model. A correlation matching technique is used to register the mission image to the synthetic reference image. Once the matching has been completed, mission image pixels can be transformed into the corresponding synthetic reference image. Using the traceback structure associated with the synthetic reference image, these pixels can then be transformed into the scene model space. Since the scene model space exists in a global coordinate frame, the mission image has been georegistered. This algorithm is called Prediction-Based Registration (PBR). There are a number of advantages to the PBR approach. First, the transformation from image space to scene model space is computed as a 3D to 2D transformation. This avoids solving the ill-posed problem of directly transforming a 2D image into 3D space. The generation of a synthetic reference simplifies the image matching process by creating the synthetic reference at the same geometry as the mission image. Further, dissimilar sensor phenomenologies are accounted for by using the appropriate sensor model. This allows sensor platform and image formation errors to be accounted for in their own domain when multiple sensors are being registered.
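The correlation-matching step of PBR can be illustrated with normalized cross-correlation between the mission chip and the synthetic reference; the scene-model prediction and traceback stages are outside this sketch, and all names here are mine:

```python
import numpy as np

# Exhaustive normalized cross-correlation (NCC) matching: slide the mission
# image chip over the synthetic reference and return the offset with the
# highest correlation score.
def ncc(a, b):
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom else 0.0

def register(reference, chip):
    H, W = reference.shape
    h, w = chip.shape
    best, best_off = -2.0, (0, 0)
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            score = ncc(reference[i:i + h, j:j + w], chip)
            if score > best:
                best, best_off = score, (i, j)
    return best_off
```

Because the synthetic reference is rendered at the mission image's own collection geometry, a simple translational search like this suffices; the recovered offset is then carried back into the scene model via the traceback structure.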
- Conference Article
- 10.1109/itaic.2019.8785852
- May 1, 2019
In the classification and detection of synthetic aperture radar (SAR) images, SAR images are acquired from different radars. To address ground-object classification in heterogeneous SAR images with few labels, this paper uses a convolutional neural network (CNN) with transfer learning, fine-tuning a pre-trained network with a small number of labeled samples to realize ground-object classification of heterogeneous SAR images. The effectiveness of the proposed method is verified by experiments.