River Area Segmentation Using Sentinel-1 SAR Imagery with Deep-Learning Approach

Abstract

River segmentation is important in delivering essential information for environmental analytics such as water management, flood and disaster management, and observation of climate change and human activities. Advances in remote-sensing technology have produced imagery with more complex features that limit the effectiveness of traditional approaches. This work uses deep-learning models to enhance river extraction from satellite imagery. With ResNet-50 as the backbone network, CNN U-Net and DeepLabv3+ were used to segment rivers in Sentinel-1 C-band synthetic aperture radar (SAR) imagery. SAR data were selected for their ability to capture surface details regardless of weather conditions, with VV+VH band polarizations employed to improve water-surface reflectivity. A total of 1080 images were used to train and test the models, and performance was measured with the Dice coefficient. The CNN U-Net architecture achieved a Dice score of 0.94, while DeepLabv3+ attained 0.92. Although DeepLabv3+ was more stable during training and performed better on wider rivers, CNN U-Net excelled at identifying narrow rivers. In conclusion, a river-segmentation model was developed using Sentinel-1 C-band SAR data, with CNN U-Net outperforming DeepLabv3+; this enables detailed river mapping for irrigation and flood-monitoring applications, particularly in cloud-prone tropical regions.
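The Dice coefficient used above to score the segmentation can be computed directly from binary masks; a minimal sketch, where the 4×4 masks are invented for illustration and do not come from the paper:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary masks (1 = river, 0 = background)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # 2|A ∩ B| / (|A| + |B|); eps guards against two empty masks.
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 4x4 masks: predicted river pixels vs. ground truth.
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 1, 0, 0],
                 [0, 1, 0, 0]])
truth = np.array([[0, 1, 1, 0],
                  [0, 1, 1, 0],
                  [0, 1, 1, 0],
                  [0, 1, 1, 0]])
print(round(dice_coefficient(pred, truth), 3))
```

Here the intersection covers 6 pixels against mask sizes of 6 and 8, giving 12/14 ≈ 0.857.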

Similar Papers
  • Conference Article
  • Cited by 6
  • 10.1109/igarss47720.2021.9553448
Exploring the Fusion of Sentinel-1 SAR and Sentinel-2 MSI Data for Built-Up Area Mapping Using Deep Learning
  • Jul 11, 2021
  • Sebastian Hafner + 2 more

This research explores the potential of combining Sentinel-1 C-band Synthetic Aperture Radar (SAR) and Sentinel-2 MultiSpectral Instrument (MSI) data for Built-Up Area (BUA) mapping using deep learning. A lightweight U-Net model is trained using openly available building footprint reference data in North America and tested in four cities across three additional continents. The best test performance in terms of F1 score was achieved by the joint use of SAR and multispectral data (0.676), followed by multi-spectral (0.611) and SAR data (0.601). The developed fusion approach is particularly promising to distinguish BUA in low-density residential neighborhoods. Furthermore, our fusion approach compares favorably to the state-of-the-art in BUA mapping in the selected cities. However, associated with the diverse characteristics of human settlements around the world, considerable differences in accuracy among the test cities were observed. This indicates the need for more sophisticated fusion techniques to improve CNN model generalization and for adding more diverse training data.

  • Research Article
  • 10.1007/s43621-025-00843-4
Urban waterlogging vulnerability assessment using SAR imagery and integrated terrain analysis
  • Jan 24, 2025
  • Discover Sustainability
  • R J Jerin Joe + 3 more

Waterlogging is a significant concern in urban areas and can result in severe disruptions and damage. This study is conducted in Thoothukudi, Tamil Nadu, which is particularly sensitive to waterlogging because of its geographical and meteorological circumstances. Using synthetic aperture radar (SAR) images from 2015 to 2022, topographical analysis, land use/land cover (LULC) data, and geological insights, this research aims to identify and assess areas prone to waterlogging. The data sources for this study comprise rainfall records from the Indian Meteorological Department (IMD), Sentinel-1 SAR imagery, Sentinel-2 multispectral images from the European Space Agency (ESA), and the Shuttle Radar Topography Mission (SRTM) Digital Elevation Model (DEM). Terrain analysis was undertaken using the DEM to generate elevation, slope, and aspect maps, while SAR data were processed to extract water pixels for each year and overlay them. The overlaid image was correlated with topographic maps to identify high-risk regions. Key places such as Muthayapuram, Milavittan, Bryant Nagar, and Thalamuthunagar were consistently highlighted as flood-prone. Additionally, the saltpan regions, characterized by low-lying water-table levels, endure continuous flooding, demonstrating the usefulness of combining SAR imaging with topographic analysis for urban water management. These findings provide useful insights for urban planners and policymakers, underlining the need for deliberate steps to reduce waterlogging, maintain public health, and minimize infrastructure damage, thus enabling sustainable development in Thoothukudi.

  • Preprint Article
  • Cited by 2
  • 10.5194/egusphere-egu2020-5305
National-scale mangrove forest mapping by using Sentinel-1 SAR and Sentinel-2 MSI imagery on the Google Earth Engine Platform
  • Mar 23, 2020
  • Luojia Hu + 3 more

Mangrove forest is considered one of the pivotal ecosystems for near-shore environmental health, adjacent terrestrial ecosystems, and even global climate-change mitigation. However, over the past two decades, mangroves have been declining rapidly. To take effective steps to prevent their extinction, high-spatial-resolution information on large-scale mangrove distribution is urgently needed. A recent study indicated that a suitable pixel size for extracting mangroves should be at least equal to 10 m. Hence, Sentinel imagery (Sentinel-1 C-band synthetic aperture radar (SAR) and Sentinel-2 Multi-Spectral Instrument (MSI) imagery), with its 10 m spatial resolution, may hold great potential to achieve this goal, but few studies have investigated it. Therefore, in this study we explore the potential of Sentinel imagery to extract mangrove forests in China on the Google Earth Engine platform. Specifically, our study addresses three questions: (1) Which Sentinel imagery provides higher accuracy for mangrove forest mapping, Sentinel-1 SAR data or Sentinel-2 multi-spectral data? (2) Which combination of features from Sentinel imagery provides the most accurate mangrove forest map? (3) Compared to 30-m-resolution mangrove products derived from Landsat imagery, how does a 10-m-resolution map improve our knowledge of the distribution of mangrove forest in China?

Our results show that: (1) the highest producer's accuracies (producer's accuracy is used as the evaluation indicator here because omission errors in mangrove-extent maps are much larger than commission errors) of mangrove forest maps derived from Sentinel-1 and Sentinel-2 imagery are 91.76% and 90.39%, respectively, meaning the contributions of Sentinel-1 SAR and Sentinel-2 MSI imagery to mangrove mapping are similar; (2) the highest producer's accuracy of the mangrove forest map at 10 m resolution is 95.4%. The most accurate map is obtained by combining quantiles of spectral and backscatter bands, spectral indices, and texture indices derived from time series of Sentinel-1 and Sentinel-2 imagery, indicating that the combination of Sentinel-1 SAR and Sentinel-2 MSI imagery is more useful for mangrove forest mapping than either sensor alone; (3) in China, the total mangrove forest extent at 10 m resolution is similar to that at 30 m resolution (20,003 ha vs. 19,220 ha). However, compared to 30-m-resolution products, the 10-m-resolution map identifies 1741 ha (8.7% of the total mangrove forest area in China) of mangrove patches smaller than 1 ha, which are especially important to low-lying coastal zones. This study demonstrates the feasibility of Sentinel imagery for large-scale mangrove forest mapping and offers guidance for mapping global mangrove forest at 10 m resolution in the future.
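The producer's accuracy used as the evaluation indicator above is the complement of the omission error: correctly mapped reference pixels divided by all reference pixels of the class. A small illustrative sketch (the label arrays are toy values, not the study's data):

```python
import numpy as np

def producers_accuracy(pred, truth, cls=1):
    """Producer's accuracy for one class: correctly classified reference
    pixels of that class over all reference pixels of that class."""
    pred, truth = np.asarray(pred), np.asarray(truth)
    ref = truth == cls
    return np.logical_and(pred == cls, ref).sum() / ref.sum()

truth = np.array([1, 1, 1, 1, 0, 0])
pred  = np.array([1, 1, 1, 0, 0, 1])  # one omission, one commission
print(producers_accuracy(pred, truth))
```

Three of the four reference mangrove pixels are recovered, so the producer's accuracy is 0.75; the commission error does not affect it.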

  • Research Article
  • Cited by 3
  • 10.1080/01431161.2021.1995074
Deep convolutional neural network with random field model for lake ice mapping from Sentinel-1 imagery
  • Nov 14, 2021
  • International Journal of Remote Sensing
  • Zhiguo Ma + 9 more

Timely information on lake ice cover conditions is critical in support of commercial shipping, winter-road transportation, and winter leisure activities such as snowmobiling and ice fishing. Monitoring of lake ice extent and ice phenology (i.e. dates associated with freeze-up/break-up and ice duration) is also valuable for improving numerical weather prediction (NWP) and for climate monitoring. The availability of free synthetic aperture radar (SAR) data from the European Space Agency’s Sentinel-1 A/B constellation provides an unprecedented opportunity to develop operational algorithms for lake ice cover mapping at a temporal frequency not available until now from SAR missions (ca. every 1–5 days depending on latitude). For NWP, rapid and accurate mapping of ice cover and open water areas in lake-rich regions is required. However, the classification of SAR imagery using traditional machine learning (ML) approaches is challenging due to the large data volume generated by imaging systems such as Sentinel-1 SAR as well as the complexity of radar signatures as a function of sensor and target characteristics. For lakes specifically, radar backscatter can vary greatly with incidence angle, polarization and surface properties (e.g. calm and wind-roughened open water, new thin ice, surface roughness due to the presence of pressure ridges, ice type, melting of ice and on-ice snow). To address the challenge of large-scale lake ice mapping, we investigate a GPU-boosted deep neural network approach that is efficient at handling big complex data. In this paper, we design a novel maximum a posteriori (MAP) approach combining a convolutional neural network (CNN) and conditional random field (CRF) to better address the challenges of operational lake ice mapping. The proposed approach is tested on 17 Sentinel-1 dual-polarization (VV and VH) SAR images, where eight are used for training and validation and nine for out-of-sample testing.
To identify the optimal network architecture, an independent validation set is used to evaluate the performance of a series of six CNNs with increasing model complexity. The best model overcomes the noise effect in Sentinel-1 SAR imagery and the lake ice signature ambiguity issue; it achieves average classification accuracies of 97.10% and 97.14% for open water and ice, respectively, on the validation set. Moreover, the best model outperforms the last-ranked model by about 2% in terms of mean overall accuracy (OA), demonstrating the improvement and importance of model selection. The use of CRF can consistently improve the CNN by reducing the artefacts and noise effect in ice maps, outperforming the CNN model when used alone by about 3% in terms of mean OA on the validation set. The proposed CNN-CRF approach also achieves high accuracy on the nine test scenes, achieving a mean OA of 93.10%, demonstrating strong generalization capability that is important for SAR lake ice mapping.

  • Research Article
  • Cited by 24
  • 10.3390/rs10091367
An Empirical Algorithm to Retrieve Significant Wave Height from Sentinel-1 Synthetic Aperture Radar Imagery Collected under Cyclonic Conditions
  • Aug 28, 2018
  • Remote Sensing
  • Weizeng Shao + 6 more

In this study, an empirical algorithm is proposed to retrieve significant wave height (SWH) from dual-polarization Sentinel-1 (S-1) synthetic aperture radar (SAR) imagery collected under cyclonic conditions. The retrieval scheme is based on the well-known CWAVE empirical function, here updated to deal with multi-polarization S-1 SAR measurements collected using the interferometric wide (IW) and extra-wide-swath (EW) imaging modes under cyclonic conditions. First, a training dataset consisting of six S-1 SAR images collected under cyclonic conditions is exploited both to tune the retrieval function and to check the soundness of the retrievals against co-located WAVEWATCH-III (WW3) numerical simulations. The comparison of WW3 model simulations with measurements from the Jason-2 altimeter shows a 0.29 m root mean square error (RMSE) in SWH. Then, a testing dataset consisting of two S-1 SAR images is exploited to provide a preliminary validation. The results, verified against both WW3 and European Centre for Medium-Range Weather Forecasts (ECMWF) data, show the soundness of the proposed approach.

  • Research Article
  • Cited by 10
  • 10.1186/s40068-023-00324-5
Fusion of sentinel-1 SAR and sentinel-2 MSI data for accurate Urban land use-land cover classification in Gondar City, Ethiopia
  • Nov 28, 2023
  • Environmental Systems Research
  • Shimelis Sishah Dagne + 4 more

Effective urban planning and management rely on accurate land cover mapping, which can be achieved through the combination of remote sensing data and machine learning algorithms. This study aimed to explore and demonstrate the potential benefits of integrating Sentinel-1 SAR and Sentinel-2 MSI satellite imagery for urban land cover classification in Gondar city, Ethiopia. Synthetic Aperture Radar (SAR) data from Sentinel-1A and Multispectral Instrument (MSI) data from Sentinel-2B for the year 2023 were utilized for this research work. Support Vector Machine (SVM) and Random Forest (RF) machine learning algorithms were utilized for the classification process. Google Earth Engine (GEE) was used for the processing, classification, and validation of the remote sensing data. The findings of the research provided valuable insights into the performance evaluation of the Support Vector Machine (SVM) and Random Forest (RF) algorithms for image classification using different datasets, namely Sentinel 2B Multispectral Instrument (MSI) and Sentinel 1A Synthetic Aperture Radar (SAR) data. When applied to the Sentinel 2B MSI dataset, both SVM and RF achieved an overall accuracy (OA) of 0.69, with a moderate level of agreement indicated by the Kappa score of 0.357. For the Sentinel 1A SAR data, SVM maintained the same OA of 0.69 but showed an improved Kappa score of 0.67, indicating its suitability for SAR image classification. In contrast, RF achieved a slightly lower OA of 0.66 with Sentinel 1A SAR data. However, when the datasets of Sentinel 2B MSI and Sentinel 1A SAR were combined, SVM achieved an impressive OA of 0.91 with a high Kappa score of 0.80, while RF achieved an OA of 0.81 with a Kappa score of 0.809. 
These findings highlight the potential of fusing satellite data from multiple sources to enhance the accuracy and effectiveness of image classification algorithms, making them valuable tools for various applications, including land use mapping and environmental monitoring.
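The overall accuracy and Kappa scores reported above both derive from a class-by-class confusion matrix; a brief sketch of Cohen's kappa, using a hypothetical 3-class matrix rather than the study's results:

```python
import numpy as np

def cohens_kappa(confusion: np.ndarray) -> float:
    """Cohen's kappa from a confusion matrix (rows = reference, cols = predicted):
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    confusion = np.asarray(confusion, dtype=float)
    n = confusion.sum()
    po = np.trace(confusion) / n                              # observed (overall) accuracy
    pe = (confusion.sum(0) * confusion.sum(1)).sum() / n**2   # agreement expected by chance
    return (po - pe) / (1.0 - pe)

# Hypothetical 3-class confusion matrix.
cm = np.array([[50,  5,  5],
               [ 5, 40,  5],
               [ 0, 10, 30]])
print(round(cohens_kappa(cm), 3))
```

For this matrix the overall accuracy is 120/150 = 0.80 while kappa is about 0.70, illustrating how kappa discounts the agreement expected by chance.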

  • Research Article
  • 10.3390/rs17122031
Tracking Post-Fire Vegetation Regrowth and Burned Areas Using Bitemporal Sentinel-1 SAR Data: A Google Earth Engine Approach in Heath Vegetation of Mooloolah River National Park, Queensland, Australia
  • Jun 12, 2025
  • Remote Sensing
  • Harikesh Singh + 3 more

This study utilizes the unique capabilities of Sentinel-1 C-band synthetic aperture radar (SAR) data to map post-fire burned areas and monitor vegetation recovery in a heath-dominated Queensland National Park. Sentinel-1 SAR data were used due to their cloud-penetrating capability and frequent revisit times. Using Google Earth Engine (GEE), a bitemporal ratio analysis was applied to SAR data from post-fire periods between 2021 and 2023. SAR backscatter changes over time captured fire impacts and subsequent vegetation regrowth. This differentiation was further enhanced with k-means clustering. Validation was supported by Sentinel-2 dNBR and official fire history records. The dNBR provided a quantitative assessment of burn severity and was used alongside the fire history data to evaluate the accuracy of the burned area classification. While Sentinel-2 false-colour composite (FCC) imagery was generated for visualisation and interpretation purposes, the primary validation relied on dNBR and QPWS fire history records. The results highlighted significant vegetation regrowth, with some areas returning to near pre-fire biomass levels by March 2023. This approach demonstrates the sensitivity of Sentinel-1 SAR, especially in VV polarization, for detecting subtle changes in vegetation, providing a cost-effective method for post-fire ecosystem monitoring and informing ecological management strategies amid increasing wildfire events.
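The bitemporal ratio analysis described above compares backscatter before and after the fire; a minimal sketch of a log-ratio change image with a simple threshold. The backscatter values and the −3 dB cutoff are illustrative assumptions only (the paper refines the separation with k-means clustering rather than a fixed threshold):

```python
import numpy as np

def log_ratio_db(pre: np.ndarray, post: np.ndarray) -> np.ndarray:
    """Bitemporal change image: post/pre backscatter ratio in dB.
    Strongly negative values indicate backscatter loss (e.g. burned vegetation)."""
    return 10.0 * np.log10(post / pre)

# Hypothetical linear-power VV backscatter for a 2x2 patch.
pre  = np.array([[0.10, 0.10],
                 [0.08, 0.12]])
post = np.array([[0.05, 0.10],
                 [0.02, 0.12]])
change = log_ratio_db(pre, post)
burned = change < -3.0   # illustrative threshold; clustering could replace this
print(burned)
```

Pixels whose backscatter is unchanged sit at 0 dB, while the pixels that lost half or three quarters of their power fall below the cutoff and are flagged.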

  • Conference Article
  • 10.1117/12.2230385
Performance evaluation of SAR/GMTI algorithms
  • Jul 27, 2016
  • David Sobota + 5 more

There is a history and understanding of exploiting moving targets within ground moving target indicator (GMTI) data, including methods for modeling performance. However, many assumptions valid for GMTI processing are invalid for synthetic aperture radar (SAR) data. For example, traditional GMTI processing assumes targets are exo-clutter and a system that uses a GMTI waveform, i.e. low bandwidth (BW) and low pulse repetition frequency (PRF). Conversely, SAR imagery is typically formed to focus data at zero Doppler and requires high BW and high PRF. Therefore, many of the techniques used in performance estimation of GMTI systems are not valid for SAR data. However, as demonstrated by papers in the recent literature, there is interest in exploiting moving targets within SAR data. The techniques employed vary widely, including filter banks to form images at multiple Dopplers, performing smear detection, and attempting to address the issue through waveform design. The above work validates the need for moving target exploitation in SAR data, but it does not represent a theory allowing for the prediction or bounding of performance. This work develops an approach to estimate and/or bound performance for moving target exploitation specific to SAR data. Synthetic SAR data is generated across a range of sensor, environment, and target parameters to test the exploitation algorithms under specific conditions. This provides a design tool allowing radar systems to be tuned for specific moving target exploitation applications. In summary, we derive a set of rules that bound the performance of specific moving target exploitation algorithms under variable operating conditions.

  • Research Article
  • Cited by 2
  • 10.1080/07038992.2019.1583096
The Impact of Variability in SAR Satellite Imagery on Classification
  • Mar 4, 2019
  • Canadian Journal of Remote Sensing
  • Katerina Biron + 1 more

Artificial intelligence (AI) can be a useful tool to gather intelligence from remote sensing data; it helps make sense of synthetic aperture radar (SAR) data via discovery and exploitation. The challenge of utilizing AI in SAR applications is obtaining (large enough) comprehensive sets of labeled training data because SAR data has significant variation across sensor-related characteristics, across processing parameters, and across the different collection plans. This work evaluates the impact of SAR satellite imagery variations on classification accuracy, and demonstrates this by classifying pixels of SAR imagery into land, water, and ship for varying conditions (area-of-interest, incidence angle, spatial resolution, etc.). Results showed that variations in the area-of-interest (AOI), incidence angle, and spatial resolution impacted the classification results obtained using an artificial neural network (ANN). This work also demonstrated that ANNs trained on SAR imagery can be used to infer training data labels of other SAR imagery obtained from different conditions, provided that the changes in condition produced less than a 5% classification error or increased class separation for some (or all) of the classes being discriminated.

  • Research Article
  • Cited by 86
  • 10.5589/m03-014
Synergy of multitemporal ERS-1 SAR and Landsat TM data for classification of agricultural crops
  • Jan 1, 2003
  • Canadian Journal of Remote Sensing
  • Yifang Ban

The objective of this research was to evaluate the synergistic effects of multitemporal European remote sensing satellite 1 (ERS-1) synthetic aperture radar (SAR) and Landsat thematic mapper (TM) data for crop classification using a per-field artificial neural network (ANN) approach. Eight crop types and conditions were identified: winter wheat, corn (good growth), corn (poor growth), soybeans (good growth), soybeans (poor growth), barley/oats, alfalfa, and pasture. With the per-field approach using a feed-forward ANN, the overall classification accuracy of three-date early- to mid-season SAR data improved almost 20%, and the best classification of a single-date (5 August) SAR image improved the overall accuracy by about 26%, in comparison to a per-pixel maximum-likelihood classifier (MLC). Both single-date and multitemporal SAR data demonstrated their abilities to discriminate certain crops in the early and mid-season; however, these overall classification accuracies (<60%) were not sufficiently high for operational crop inventory and analysis, as the single-parameter, high-incidence-angle ERS-1 SAR system does not provide sufficient differences for eight crop types and conditions. The synergy of TM3, TM4, and TM5 images acquired on 6 August and SAR data acquired on 5 August yielded the best per-field ANN classification of 96.8% (kappa coefficient = 0.96). It represents an 8.3% improvement over TM3, TM4, and TM5 classification alone and a 5% improvement over the per-pixel classification of TM and 5 August SAR data. These results clearly demonstrated that the synergy of TM and SAR data is superior to that of a single sensor and the ANN is more robust than MLC for per-field classification. The second-best classification accuracy of 95.9% was achieved using the combination of TM3, TM4, TM5, and 24 July SAR data. 
The combination of TM3, TM4, and TM5 images and three-date SAR data, however, only yielded an overall classification accuracy of 93.89% (kappa = 0.93), and the combination of TM3, TM4, TM5, and 15 June SAR data decreased the classification accuracy slightly (88.08%; kappa = 0.86) from that of TM alone. These results indicate that the synergy of satellite SAR and Landsat TM data can produce much better classification accuracy than that of Landsat TM alone only when careful consideration is given to the temporal compatibility of SAR and visible and infrared data.

  • Preprint Article
  • Cited by 1
  • 10.5194/egusphere-egu22-10637
A deep learning approach for mapping and monitoring glacial lakes from space
  • Mar 28, 2022
  • Manu Tom + 2 more

Climate change intensifies glacier melt, which leads to the formation of numerous new glacial lakes in the overdeepenings of former glacier beds. Additionally, the area of many existing glacial lakes is increasing. More than one thousand glacial lakes have emerged in Switzerland since the Little Ice Age, and hundreds of lakes are expected to form in the 21st century. Rapid deglaciation and the formation of new lakes severely affect downstream ecosystem services, hydropower production, and high-alpine hazard situations. Glacier-lake inventories for high-alpine terrain are increasingly becoming available to the research community. However, high-frequency mapping and monitoring of these lakes is necessary to assess hazards and estimate Glacial Lake Outburst Flood (GLOF) risks, especially for lakes with high seasonal variations. One way to achieve this goal is to leverage satellite-based remote sensing, using optical and Synthetic Aperture Radar (SAR) satellite sensors and deep learning.

There are several challenges to be tackled. Mapping glacial lakes using satellite sensors is difficult because a great majority of these lakes cover very small areas. The inability of optical sensors (e.g. Sentinel-2) to sense through clouds creates another bottleneck. Further challenges include cast and cloud shadows and increased levels of lake and atmospheric turbidity. Radar sensors (e.g. Sentinel-1 SAR) are unaffected by cloud obstruction; however, handling cast shadows and natural backscattering variations from water surfaces are hurdles in SAR-based monitoring. Due to these sensor-specific limitations, optical sensors provide generally less ambiguous but temporally irregular information, while SAR data provide lower classification accuracy but without cloud gaps.

We propose a deep-learning-based SAR-optical satellite data fusion pipeline that merges the complementary information from both sensors. We put forward Sentinel-1 SAR and Sentinel-2 L2A imagery as input to a deep network with a Convolutional Neural Network (CNN) backbone. The proposed pipeline fuses information from the two input branches that feed heterogeneous satellite data. A shared block learns embeddings (feature representations) invariant to the input satellite type, which are then fused to guide the identification of glacial lakes. Our ultimate aim is to produce geolocated maps of the target regions in which the proposed bottom-up, data-driven methodology classifies each pixel as either "lake" or "background".

This work is part of two major projects: the ESA AlpGlacier project, which targets mapping and monitoring of the glacial lakes in the Swiss (and European) Alps, and the UNESCO (Adaptation Fund) GLOFCA project, which aims to reduce the vulnerability of populations in the Central Asian countries (Kazakhstan, Tajikistan, Uzbekistan, and Kyrgyzstan) to GLOFs in a changing climate. As part of the GLOFCA project, we are developing a Python-based analytical toolbox for the local authorities, which incorporates the proposed deep-learning-based pipeline for mapping and monitoring the glacial lakes in the target regions in Central Asia.

  • Preprint Article
  • 10.5194/egusphere-egu21-1858
Transitioning SAR-derived Oil Spill Thickness Measurements into an Operational Context
  • Mar 3, 2021
  • Benjamin Holt + 3 more

We describe an effort to develop a quantifiable approach for determining the thicker components of oil spills using microwave synthetic aperture radar (SAR) imagery that can be utilized in an operational context to guide clean-up efforts. The presence of mineral oil on the surface can suppress SAR returns in two ways. First, surface oil dampens the capillary waves, making those areas darker in SAR imagery, an effect that has been used to determine oil extent. The second is by modifying the dielectric properties of the surface from those of clean seawater to either pure oil or a mixture of oil and water as the oil weathers and thickens to form an emulsion. The emulsion provides an intermediate conductive surface layer between the highly conductive ocean itself and the very low, 'radar transparent' sheen layers, resulting in a further reduction in the radar returns for areas with thicker oil within an inhomogeneous oil slick. The challenges are to quantify the thickness and conditions for which this thicker layer becomes separable from the thinner oil, determine whether multiple thicker components can be identified, identify which airborne and spaceborne SAR systems can be used for this purpose, and determine under what range of environmental conditions, particularly wind speed, this is possible.

We are planning to hold field campaigns with in situ measurements and SAR and multispectral remote-sensor data collections from drones, aircraft, and satellites. The field measurements include surface collections of oil, underwater spectrophotometry, and drone-based infrared, ultraviolet, and optical collections. Coincident with the field measurements, the airborne L-band NASA UAVSAR system will image the seep fields to track temporal changes, and overpassing satellite acquisitions will be acquired.

UAVSAR provides fine-resolution, low-noise radar imagery under all weather and solar conditions and is fully polarimetric, which enables evaluation of multiple methods to characterize the oil slick. The system noise floor of this instrument, considerably lower than that of all satellite SAR instruments, enables a detailed examination of the zones of reduced backscatter caused by varying oil-thickness levels. The primary satellite SAR will be C-band Sentinel-1, accompanied potentially by C-band Radarsat-2 and L-band ALOS-2. Both the UAVSAR and satellite SAR analyses will utilize the contrast ratio, defined as the normalized radar cross section (NRCS) in open water divided by the NRCS in oil-covered water; the larger the ratio, the thicker the oil. The operational algorithm for oil thickness is under development using satellite SAR data and will be staged in NOAA's SAR Ocean Product System (SAROPS), which currently produces SAR-derived wind speed and oil-spill extent operationally, the latter using the Texture-Classifying Neural Network (TCNNA) to automatically delineate oil versus non-oil-covered areas. We are planning field campaigns at the natural oil-seep area offshore of Santa Barbara, California, in March 2021 and during the 2022 Norwegian Clean Seas Association for Operating Companies' (NOFO's) coordinated releases of oil in the North Sea. Recent field collections and analysis will be shown, as available.
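The contrast ratio defined above is straightforward to compute from calibrated NRCS values; a toy sketch, where the sigma-nought numbers are invented purely for illustration:

```python
import numpy as np

def contrast_ratio(nrcs_water: np.ndarray, nrcs_oil: np.ndarray) -> np.ndarray:
    """Contrast ratio: NRCS of clean open water divided by NRCS in the slick.
    Larger ratios indicate stronger damping, i.e. thicker oil."""
    return nrcs_water / nrcs_oil

# Hypothetical linear NRCS values for open water and three slick zones.
water = np.array([0.020, 0.020, 0.020])
slick = np.array([0.010, 0.004, 0.002])   # thin sheen -> thicker emulsion
print(contrast_ratio(water, slick))
```

The three zones yield ratios of 2, 5, and 10, ordering the slick from thin sheen to the thickest emulsion.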

  • Conference Article
  • Cited by 2
  • 10.1109/eorsa.2012.6261162
A novel classification method based on texture analysis using high-resolution SAR and optical data
  • Jun 1, 2012
  • Yunxiao Luo + 5 more

Data fusion is an efficient way to exploit multi-source, multi-platform, and multi-angle remotely sensed information. Optical imagery and SAR (synthetic aperture radar) data are complementary in terms of data-acquisition capability and image characteristics. With their different capabilities and unique information content, fusion of high-resolution SAR and optical multispectral imagery can improve land-use classification accuracy. Texture information plays an important role in class discrimination, especially in SAR imagery, because its backscatter is sensitive to the type, orientation, homogeneity, and spatial relationships of ground objects. To take full advantage of multi-source remotely sensed data and combine their different features, this paper puts forward a data fusion method for high-spatial-resolution remotely sensed data based on texture analysis. Texture features of high-resolution SAR imagery were extracted using the GLCM (Grey Level Co-occurrence Matrix) method. The texture features were computed in four directions (0°, 45°, 90°, and 135°), and moving-window sizes from 3×3 and 5×5 up to 31×31, 41×41, 51×51, and 61×61 were tested to analyze their influence. The selected texture features were combined with the SAR data for subsequent classification. Both images were classified using an object-based, rule-based approach. Then a decision-level fusion was implemented, and the classification accuracy improved from 78.7% and 83.0% to 88.8%.
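The GLCM texture extraction described above can be sketched in a few lines. This pure-NumPy version builds a normalized co-occurrence matrix for a single pixel offset and derives the contrast feature; the 5×5 window, quantization to 4 grey levels, and the two offsets (0° and 90°) are illustrative choices, not the paper's exact configuration:

```python
import numpy as np

def glcm(window: np.ndarray, levels: int, dr: int, dc: int) -> np.ndarray:
    """Normalized grey-level co-occurrence matrix for one offset (dr, dc)."""
    g = np.zeros((levels, levels))
    rows, cols = window.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                g[window[r, c], window[r2, c2]] += 1  # count the co-occurring pair
    return g / g.sum()

def glcm_contrast(p: np.ndarray) -> float:
    """Contrast texture feature: sum of p(i, j) * (i - j)^2."""
    i, j = np.indices(p.shape)
    return float((p * (i - j) ** 2).sum())

# 4-level quantized 5x5 window; offsets correspond to 0 and 90 degrees.
w = np.array([[0, 0, 1, 1, 2],
              [0, 1, 1, 2, 2],
              [1, 1, 2, 2, 3],
              [1, 2, 2, 3, 3],
              [2, 2, 3, 3, 3]])
for dr, dc in [(0, 1), (1, 0)]:
    print(glcm_contrast(glcm(w, levels=4, dr=dr, dc=dc)))
```

In a full pipeline this would be run over a sliding window per pixel and per direction, producing the texture bands that are stacked with the SAR data before classification.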

  • Research Article
  • 10.1109/lgrs.2019.2944432
Assessing the Usefulness of Iceberg Electromagnetic Backscatter Modeling Using a C-Band SAR Classifier
  • Oct 25, 2019
  • IEEE Geoscience and Remote Sensing Letters
  • Md S Ferdous + 5 more

This letter presents the validation of an electromagnetic (EM) backscatter model of icebergs at C-band by comparing the performance of target classifiers trained with modeled and with real synthetic aperture radar (SAR) data. Simulated SAR data were obtained for a combination of imaging beam modes and scene parameters to produce 216 simulated Sentinel-1 C-band SAR images. Parameters consisted of Sentinel-1 IW1 (33.1°) and IW3 (43.1°) beam modes with varying wind speed (5 and 10 m/s), wind direction (0°, 45°, and 90°), and target orientation (0°, 45°, and 90°). Simulations were created with an EM SAR simulator called GRECOSAR, which took 3-D profiles of iceberg and ship targets together with the parameters necessary to closely mimic the real scenes. 3-D models of three icebergs were captured in a field study off the coast of Bonavista, Newfoundland and Labrador, Canada, in June 2017. Three generic ship models were sourced from an online inventory and scaled to a size equivalent to that of the iceberg targets. Real SAR image data were drawn from an in-house data set collected in a complementary research program. Classifiers including a support vector machine (SVM), Random Forest (RanFor), k-nearest neighbor (kNN), and a neural network (NN) were trained with targets from the modeled SAR data, which was then gradually mixed with real SAR data. Classifier performance with the modeled target data was shown to be similar to that of classifiers trained entirely on real SAR data. The similarity in accuracy provides an indication of the validity of the modeled SAR data for this specific application.
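The training protocol in this abstract, starting from modeled targets and gradually mixing in real SAR targets, can be sketched as follows (function and variable names are illustrative, not from the letter):

```python
import numpy as np

def mixed_training_set(modeled_X, modeled_y, real_X, real_y,
                       real_fraction, seed=0):
    """Combine all modeled (simulated) SAR targets with a randomly
    drawn fraction of the real SAR targets, as in the gradual-mixing
    experiment described above. real_fraction lies in [0, 1]."""
    rng = np.random.default_rng(seed)
    n_real = int(round(real_fraction * len(real_X)))
    idx = rng.choice(len(real_X), size=n_real, replace=False)
    X = np.concatenate([modeled_X, real_X[idx]])
    y = np.concatenate([modeled_y, real_y[idx]])
    perm = rng.permutation(len(X))  # shuffle so both sources are mixed
    return X[perm], y[perm]
```

Any of the classifiers mentioned (SVM, Random Forest, kNN, NN) could then be fit on the returned set, sweeping real_fraction from 0 to 1 to see how much real data is needed to match an all-real baseline.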

  • Research Article
  • Cite Count Icon 2
  • 10.1007/bf02997072
Interpretation of Synthetic Aperture Radar (SAR) imagery for geological appraisal: A comparative study in Anantapur district of Andhra Pradesh
  • Dec 1, 1990
  • Journal of the Indian Society of Remote Sensing
  • S C Sharma + 3 more

The paper describes a comparative study of geological interpretations carried out from Synthetic Aperture Radar (SAR) imagery, Landsat MSS (B & W) imagery, and aerial photographs, covering 2100 sq km in Anantapur district of Andhra Pradesh. The area comprises the Peninsular Gneissic Complex and rocks of the Dharwar and Cuddapah Super Groups, besides the Quaternary alluvial deposits along the Penneru river and its tributaries. Geomorphologically, the area is represented by denudational, fluvial, and structural landforms. The study indicates that the geological and geomorphological maps prepared from SAR imagery and aerial photographs are comparable in detail despite the smaller scale of the SAR imagery, while the same details are not exhibited in Landsat imagery, mainly because of its low resolution. Although broad lithological units can be discriminated on both SAR imagery and aerial photographs, some finer rock types, viz. gabbroic dykes, could be distinguished from dolerite dykes in the SAR imagery owing to their different surface roughness. Stereoscopic coverage and the enhanced micro-relief of SAR imagery give better geomorphological detail than aerial photographs. A detailed study of lineaments was also carried out, which shows that SAR imagery over-represents short lineaments across the look direction, due to enhanced micro-relief and radar-shadow effects, and under-represents lineaments along the look direction. Landsat imagery is perhaps the best for demarcating lineaments of regional magnitude, while aerial photographs are good for depicting shorter lineaments. However, certain lineaments seen in SAR imagery are often not continuously visible on aerial photographs.
