KDGraph: A Keypoint Detection Method for Road Graph Extraction From Remote Sensing Images
- Research Article
6
- 10.19184/geosi.v3i2.7934
- Aug 28, 2018
- Geosfera Indonesia
AN ASSESSMENT OF SPATIAL VARIATION OF LAND SURFACE CHARACTERISTICS OF MINNA, NIGER STATE NIGERIA FOR SUSTAINABLE URBANIZATION USING GEOSPATIAL TECHNIQUES
- Research Article
62
- 10.3390/rs10050711
- May 4, 2018
- Remote Sensing
Nowadays, our ability to acquire remote sensing data has improved to an unprecedented level.[...]
- Research Article
- 10.3233/jcm-226604
- Apr 4, 2023
- Journal of Computational Methods in Sciences and Engineering
The feature extraction of Gaofen-2 Remote Sensing Images (RSIs) suffers from poor extraction accuracy and large noise-reduction error. This paper therefore designs an RSI feature extraction method for Gaofen-2 imagery based on the wavelet transform (WT). The RSI is acquired via the Gaofen-2 satellite and high-resolution remote sensing technology: key points of the image are determined in a Gaussian difference scale space, and edge key points are judged by the peak curvature of the difference function at edge junctions, completing RSI acquisition. Specific filtering and spatial-domain transformations remove image noise and improve RSI quality. The mean shift (MS) algorithm iteratively finds the regions of the RSI space where sample points are densest, completing the image analysis and the preprocessing of the Gaofen-2 RSI. Linear features of the RSI are determined by the WT algorithm, and an image threshold is set for feature extraction of the Gaofen-2 RSI. The experimental results show that, in a noise-reduction error analysis across methods, the error curve of the proposed method on the sample RSIs stays lowest, always below 2%, while the two previously proposed comparison methods yield higher errors. The proposed scheme is also more accurate in key-point feature extraction. The method therefore has better overall performance: it effectively improves the feature extraction accuracy of RSIs and reduces their noise.
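The mean-shift step described above, iteratively moving each sample toward the densest region of the feature space, can be sketched in a few lines. This is a generic, illustrative implementation, not the paper's code; the bandwidth and the synthetic 2-D data are arbitrary assumptions:

```python
import numpy as np

def mean_shift(points, bandwidth=1.0, n_iter=50):
    """Shift each point toward the mean of its neighbours within `bandwidth`;
    points belonging to the same dense region collapse onto one mode."""
    shifted = points.astype(float).copy()
    for _ in range(n_iter):
        for i, p in enumerate(shifted):
            dists = np.linalg.norm(shifted - p, axis=1)
            neighbours = shifted[dists < bandwidth]
            shifted[i] = neighbours.mean(axis=0)
    return shifted

# Two well-separated synthetic 2-D blobs collapse onto two modes.
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(5, 0.1, (20, 2))])
modes = mean_shift(pts, bandwidth=1.0)
```

In the paper's pipeline the input would be pixel feature vectors rather than toy points; the modes then mark the densest regions used for image analysis.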
- Research Article
7
- 10.1155/2020/2725186
- Mar 23, 2020
- Journal of Spectroscopy
In order to improve the change detection accuracy of multitemporal high-spatial-resolution remote-sensing (HSRRS) images, a change detection method based on saliency detection and spatial intuitionistic fuzzy C-means (SIFCM) clustering is proposed. First, the cluster-based saliency cue method is used to obtain the saliency maps of the two temporal remote-sensing images; then, the saliency difference is obtained by subtracting the two saliency maps; finally, the SIFCM clustering algorithm classifies the saliency difference image into changed and unchanged regions. Two data sets of multitemporal high-spatial-resolution remote-sensing images are selected as the experimental data. The detection accuracy of the proposed method is 96.17% and 97.89% on the two data sets, respectively. The results show that the proposed method is a feasible multitemporal remote-sensing image change detection method with better performance.
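As an illustration of the clustering step, a plain fuzzy C-means (without the paper's spatial intuitionistic extensions) can split a 1-D saliency-difference signal into changed and unchanged clusters. Everything here, including the synthetic difference values, is an assumption for demonstration:

```python
import numpy as np

def fuzzy_c_means(x, c=2, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy C-means on a 1-D array of difference values.
    Returns cluster centres and the (n, c) membership matrix."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)      # memberships sum to 1 per pixel
    for _ in range(n_iter):
        um = u ** m
        # Centre update: membership-weighted mean of the samples.
        centres = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
        d = np.abs(x[:, None] - centres[None, :]) + 1e-9
        # Membership update: inverse-distance weighting with exponent 2/(m-1).
        inv = d ** (-2.0 / (m - 1))
        u = inv / inv.sum(axis=1, keepdims=True)
    return centres, u

# Synthetic saliency differences: 50 "unchanged" pixels near 0.05,
# 50 "changed" pixels near 0.9.
rng = np.random.default_rng(1)
diff = np.concatenate([0.05 + 0.02 * rng.random(50), 0.9 + 0.02 * rng.random(50)])
centres, u = fuzzy_c_means(diff)
labels = u.argmax(axis=1)
```

The SIFCM variant used in the paper additionally weights memberships by spatial neighbourhood and hesitation degree; this sketch shows only the core FCM iteration.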
- Book Chapter
- 10.58532/v2bs16ch13
- Nov 30, 2023
This chapter presents a novel approach to man-made object extraction in Remote Sensing (RS) images. It focuses on the design and implementation of a system that allows a user to extract multiple objects, such as buildings or roads, from an input image with little user intervention. The framework includes five main stages: 1) pre-processing; 2) extraction of local energy features using edge information and a Gabor filter, followed by downsampling to reduce redundant information; 3) further reduction of the feature-vector size using wavelet decomposition; 4) classification and recognition of man-made structures using a Probabilistic Neural Network (PNN); and 5) NDVI-based post-classification refinement. Experiments are carried out on a dataset of 200 RS images, on which the proposed framework yields an Overall Accuracy (OA) of 93%. Experimental results validate the effectiveness of the suggested method for man-made object extraction from RS images. Compared with other methods, the proposed framework achieves significantly better accuracy and is computationally much more efficient. Most notably, it has a much smaller input size, which makes it more feasible in practical applications.
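The Gabor filtering in stage 2 relies on kernels like the following: a Gaussian envelope modulating an oriented sinusoid. This is the standard textbook Gabor kernel in NumPy, not the chapter's implementation, and all parameter values are illustrative:

```python
import numpy as np

def gabor_kernel(size=21, sigma=4.0, theta=0.0, lam=10.0, gamma=0.5, psi=0.0):
    """Real-valued Gabor kernel oriented at angle `theta` (radians).
    Convolving an image with a bank of such kernels at several orientations
    yields the local-energy texture features used for classification."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / lam + psi)
    return envelope * carrier

k = gabor_kernel()
```

A typical filter bank would sweep `theta` over, say, 0°, 45°, 90°, and 135° and sum the squared responses to obtain the local energy map.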
- Research Article
149
- 10.1016/j.eswa.2022.116793
- Mar 2, 2022
- Expert Systems with Applications
Remote sensing image super-resolution and object detection: Benchmark and state of the art
- Conference Article
1
- 10.2991/icectt-15.2015.74
- Jan 1, 2015
Keywords: Gaussian process classification, remote sensing image, coastline extraction. Abstract. Seeking faster and more accurate coastline extraction, this paper presents a method for coastline extraction from remote sensing images using Gaussian process classification, and applies it to the coastal waters of Beihai city in Guangxi. The results show that Gaussian process classification overcomes several common problems in coastline extraction: the over-learning and poor generalization of Artificial Neural Networks on small samples, and the difficulty of choosing hyper-parameters for Support Vector Machines. The successful extraction provides an efficient computational method for the measurement and identification of coastlines.
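A minimal sketch of the classification step using scikit-learn's Gaussian process classifier. The two-band pixel features and their statistics below are invented for illustration and do not come from the paper:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

# Hypothetical 2-band pixel features (e.g. NIR and red reflectance),
# labelled water (1) vs. land (0); a real pipeline would sample these
# from the imagery along the coast.
rng = np.random.default_rng(0)
water = np.column_stack([rng.normal(0.05, 0.02, 50), rng.normal(0.10, 0.02, 50)])
land = np.column_stack([rng.normal(0.30, 0.05, 50), rng.normal(0.15, 0.05, 50)])
X = np.vstack([water, land])
y = np.array([1] * 50 + [0] * 50)

# RBF kernel; the hyper-parameters are tuned automatically by
# marginal-likelihood maximisation, the property the abstract highlights.
clf = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0)).fit(X, y)
```

Classifying every pixel this way yields a water mask whose boundary is the extracted coastline.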
- Conference Article
- 10.1117/12.2193416
- Aug 5, 2015
The automatic generation of a seamline along the overlap-region skeleton is a key concern in the mosaicking of Remote Sensing (RS) images. As RS image resolution improves, rapid and accurate processing must be ensured under complex conditions. An automated seamline detection method for RS image mosaicking, based on image objects and overlap-region contour contraction, is therefore introduced, ensuring both the universality and the efficiency of mosaicking. Experiments show that this method selects seamlines with great speed and high accuracy over arbitrary overlap regions, enabling rapid RS image mosaicking in surveying and mapping production.
- Research Article
22
- 10.32604/cmc.2022.025118
- Jan 1, 2022
- Computers, Materials & Continua
Classifying remote sensing images by their content has applications in a variety of areas, and remote sensing image scene classification has attracted considerable research interest in recent years. Remote Sensing Image Scene Understanding (RSISU) research covers remote sensing image scene retrieval and scene-driven remote sensing image object identification. In the last several years, the emergence of deep learning (DL) methods has produced major breakthroughs in remote sensing image classification, opening new research and development possibilities. A new network called Pass Over (POEP) is proposed that uses both feature learning and end-to-end learning to solve the RSISU problem. This article combines feature fusion and extraction methods with classification algorithms for scene categorization. POEP offers two advantages. First, multi-resolution feature mapping via the POEP connections combines the several resolution-specific feature maps generated by the CNN, which is critical for addressing the variation in RSISU data sets. Second, enhanced pooling makes full use of the multi-resolution feature maps that include second-order information, providing more representative feature learning and enabling CNNs to better cope with RSISU tasks. The data for this paper is a UCI dataset with 21 image classes. The images were first pre-processed, and features were then extracted using an integration of the ResNet-50, AlexNet, and VGG-16 architectures. The features were fused and sent to an attention layer, after which classification takes place. The classification algorithm uses an ensemble classifier built from a Decision Tree and a Random Forest, with the optimal configuration selected through performance and comparison analysis.
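One plausible reading of the Decision Tree + Random Forest ensemble is soft voting over both models, sketched here with scikit-learn. The synthetic features stand in for the paper's fused CNN descriptors, which are not available:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.tree import DecisionTreeClassifier

# Stand-in features and labels; in the paper these would be the fused
# ResNet-50 / AlexNet / VGG-16 descriptors and the 21 scene classes.
X, y = make_classification(n_samples=300, n_features=20, n_informative=10,
                           n_classes=3, random_state=0)

ensemble = VotingClassifier(
    estimators=[("tree", DecisionTreeClassifier(random_state=0)),
                ("forest", RandomForestClassifier(n_estimators=100, random_state=0))],
    voting="soft")  # average the class probabilities of both models
ensemble.fit(X, y)
```

Soft voting averages predicted class probabilities, so the Random Forest's smoother estimates temper the single tree's hard splits; the paper does not specify its combination rule, so hard voting or stacking are equally possible readings.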
- Research Article
70
- 10.4236/ars.2015.43016
- Jan 1, 2015
- Advances in Remote Sensing
Water on the Earth’s surface is an essential part of the hydrological cycle. Water resources include surface waters, groundwater, lakes, inland waters, rivers, coastal waters, and aquifers. Monitoring lake dynamics is critical to support sustainable management of water resources on Earth. In the cryosphere, lake ice cover is a robust indicator of local climate variability and change. It is therefore necessary to review recent methods, technologies, and satellite sensors employed for the extraction of lakes from satellite imagery. The present review focuses on a comprehensive evaluation of existing methods for extracting lake or water-body features from remotely sensed optical data. We summarize the pixel-based, object-based, hybrid, spectral-index-based, and target- and spectral-matching methods employed in extracting lake features in urban and cryospheric environments. To our knowledge, almost all published research on the extraction of surface lakes in cryospheric environments has essentially used satellite remote sensing data and geospatial methods. Satellite sensors of varying spatial, temporal, and spectral resolutions have been used to extract and analyze information on surface water. Multispectral remote sensing has been widely utilized in cryospheric studies and has employed a variety of electro-optical satellite sensor systems for the characterization and extraction of cryospheric features such as glaciers, sea ice, lakes and rivers, the extent of snow and ice, and icebergs. The most common methods for extracting water bodies use single-band threshold methods, spectral index ratio (SIR)-based multiband methods, image segmentation methods, spectral-matching methods, and target detection methods (unsupervised, supervised, and hybrid). A synergetic fusion of various remote sensing methods is also proposed to improve water-information extraction accuracy.
The methods developed so far are not generic; rather, they are specific to the location, the satellite imagery, or the type of feature to be extracted. Many factors lead to inaccurate lake-feature extraction in cryospheric regions; for example, mountain shadows, which also appear as dark pixels, are often misclassified as open lakes. Methods that work well for feature extraction or land-cover classification in the cryospheric environment are not guaranteed to work in the same manner in an urban environment. Thus, in the coming years, much of the work is expected to focus on object-based approaches or hybrid approaches involving both pixel- and object-based techniques. A more accurate, versatile, and robust method needs to be developed that works independently of geographical location (for both urban and cryospheric settings) and of the type of optical sensor.
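The single-band-threshold and spectral-index families surveyed above can be illustrated with the classic NDWI; the reflectance values below are toy numbers, not real imagery:

```python
import numpy as np

def ndwi_water_mask(green, nir, threshold=0.0):
    """Spectral-index water extraction: NDWI = (G - NIR) / (G + NIR).
    Water reflects green light and strongly absorbs near-infrared,
    so water pixels score above zero; land scores below."""
    ndwi = (green - nir) / (green + nir + 1e-9)  # epsilon avoids 0/0
    return ndwi > threshold

# Toy 2x2 scene: left column water-like, right column land-like reflectance.
green = np.array([[0.12, 0.20], [0.11, 0.25]])
nir = np.array([[0.03, 0.40], [0.02, 0.45]])
mask = ndwi_water_mask(green, nir)
```

The threshold of 0 is the textbook default; the review's point is precisely that such thresholds are scene-specific, and dark mountain shadows can still leak into the mask.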
- Research Article
1
- 10.3390/rs16142645
- Jul 19, 2024
- Remote Sensing
With the continuous development of space remote sensing technology, the spatial resolution of visible remote sensing images has continuously improved, advancing remote sensing target detection. However, due to the limitation of sensor array size, it is still challenging to obtain high-resolution (HR) remote sensing images over large areas in practical applications, which makes wide-area target monitoring difficult. At present, many object detection methods focus on detection and positioning in HR remote sensing images, while relatively few studies address object detection in medium- and low-resolution (M-LR) remote sensing images. Because of its wide coverage and short observation period, M-LR remote sensing imagery is of great significance for obtaining information quickly in space applications. However, the small amount of fine texture on objects in M-LR images poses great challenges for detection and recognition. We therefore propose a small-target detection method based on degradation reconstruction, named DRADNet. Unlike previous methods that use super-resolution as a pre-processing step and then feed the image directly into a detector, we design an additional degradation-reconstruction-assisted framework that effectively improves detector performance on M-LR remote sensing images. In addition, we introduce a hybrid parallel-attention feature fusion module in the detector to focus attention on target features and suppress redundant complex backgrounds, improving the accuracy of small-target localization. Experimental results on the widely used VEDAI and Airbus-Ships datasets verify the effectiveness of our method for detecting small- and medium-sized targets in M-LR remote sensing images.
- Research Article
5
- 10.3390/rs15184423
- Sep 8, 2023
- Remote Sensing
Satellite remote sensing provides an effective technical means for the precise extraction of information on aquacultural areas, which is of great significance in realizing the scientific supervision of the aquaculture industry. Existing optical remote sensing methods for extracting aquacultural area information mostly focus on image spatial features and on classification methods for single aquaculture patterns. The combined use of spectral information and deep learning recognition technology for feature expression and discriminative extraction of aquaculture areas therefore needs further exploration. In this study, using Sentinel-2 remote sensing images, a method for the accurate extraction of different algal aquaculture zones combining spectral information and deep learning was proposed, addressing the small samples, high dimensionality, and complex water constituents characteristic of marine aquacultural areas. First, the feature expression of the aquaculture-area target was enhanced by computing the normalized difference aquaculture water index (NDAWI). Second, on this basis, an improved deep convolutional generative adversarial network (DCGAN) algorithm was used to amplify the samples and create the NDAWI dataset. Finally, three semantic segmentation methods (UNet, DeepLabv3, and SegNet) were used to build models classifying the algal aquaculture zones on the sample-amplified time series dataset, and their accuracies were comprehensively compared to achieve accurate extraction of the different algal aquaculture information within the seawater aquaculture zones. The results show that the improved DCGAN amplification outperformed the original generative adversarial network (GAN) and the standard DCGAN on the structural similarity (SSIM) and peak signal-to-noise ratio (PSNR) indexes.
The UNet classification model constructed on the basis of the improved DCGAN-amplified NDAWI dataset achieved better classification results (Lvshunkou: OA = 94.56%, kappa = 0.905; Jinzhou: OA = 94.68%, kappa = 0.913). The algorithmic model in this study provides a new method for the fine classification of marine aquaculture area information under small sample conditions.
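PSNR, one of the two sample-quality indexes used above, is straightforward to compute; this is the standard definition, applied here to a toy constant-error image:

```python
import numpy as np

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    reconstructed/generated one; higher means closer to the reference."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

ref = np.full((8, 8), 100.0)
noisy = ref + 10.0   # constant error of 10 grey levels -> MSE = 100
value = psnr(ref, noisy)   # 10 * log10(255^2 / 100) ≈ 28.13 dB
```

SSIM, the other index, additionally compares local luminance, contrast, and structure and is more involved; library implementations are usually preferred for it.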
- Research Article
6
- 10.3390/rs14051235
- Mar 2, 2022
- Remote Sensing
Due to inconsistent spatiotemporal and spectral scales, a remote sensing dataset covering a large area and a long time series shows large variations in its statistical distribution, which leads to a performance drop for a deep learning model trained only on the source domain. For the building extraction task, deep learning methods generalize weakly from the source domain to other domains. To solve this problem, we propose a Capsule–Encoder–Decoder model. We use a vector named a capsule to store the characteristics of a building and its parts. In our work, the encoder extracts capsules, which carry information about the buildings' parts, from remote sensing images. The decoder then calculates the relationship between a target building and its parts, corrects the buildings' distribution, and up-samples to extract the target buildings. Using remote sensing images of the lower Yellow River as the source dataset, building extraction experiments were run with both our method and mainstream methods. Compared with the mainstream methods on the source dataset, our method converges faster and shows higher accuracy. Significantly, without fine-tuning, our method reduces the error rate of building extraction on an almost unfamiliar dataset. The distribution of building parts in capsules carries high-level semantic information, and capsules describe the characteristics of buildings more comprehensively, making them more interpretable. The results prove that our method not only effectively extracts buildings but also generalizes well from the source remote sensing dataset to another.
- Research Article
11
- 10.1142/s0218001420540154
- Sep 4, 2019
- International Journal of Pattern Recognition and Artificial Intelligence
Target recognition is an important application in the era of high-resolution remote sensing images. However, traditional target recognition methods rely on hand-crafted features and have weak generalization ability, making it difficult to meet the demands of today's massive data volumes. It is therefore urgent to explore new methods for feature extraction and for target recognition and localization in remote sensing images. Convolutional neural networks can extract representative and discriminative multi-level features from images, so they can be used for multi-target recognition in complex scenes of remote sensing big data. In this study, the NWPU VHR-10 dataset was selected; 50% was used for training and the remainder for validation. The target recognition performance of two convolutional neural network models, Faster R-CNN and SSD, was studied and compared using mean average precision (mAP). The evaluation shows that Faster R-CNN reaches an accuracy above 80% in three categories and SSD in seven, both showing good results. The SSD model is particularly strong in running time and recognition results, demonstrating that convolutional neural networks have broad application prospects for target recognition in remote sensing image data.
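Detector evaluation with mAP rests on intersection-over-union (IoU) matching between predicted and ground-truth boxes; here is the standard IoU computation in plain Python, with illustrative coordinates:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as
    (x1, y1, x2, y2); the overlap criterion underlying mAP evaluation.
    A detection typically counts as a true positive when IoU >= 0.5."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)   # zero if boxes are disjoint
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

overlap = iou((0, 0, 10, 10), (5, 5, 15, 15))   # 25 / 175 = 1/7
```

mAP then averages, over classes, the area under each class's precision–recall curve built from these matches ranked by detection confidence.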
- Research Article
12
- 10.1016/j.apor.2023.103702
- Aug 15, 2023
- Applied Ocean Research
Ship detection in haze and low-light remote sensing images via colour balance and DCNN