Synergistic Use of LiDAR and Hyperspectral Data in Vineyard Classification: A Case Study from the Tokaj Region, Slovakia

Abstract

Hyperspectral (HS) and LiDAR sensing provide complementary information for vineyard monitoring. HS imagery captures detailed spectral signals related to canopy physiology and biochemistry but is often contaminated by inter-row soil and weeds. LiDAR offers precise measurements of canopy structure yet lacks biochemical sensitivity. Their integration can overcome these limitations, and unmanned aerial vehicles (UAVs) provide a flexible platform for collecting both datasets at very high resolution over vineyards. In this study, UAV-based hyperspectral and laser scanning data were collected in the Slovak part of the Tokaj wine region to evaluate their combined potential for distinguishing vine from non-vine areas. The surveys produced dense point clouds with more than 600 points per square metre and hyperspectral imagery of 172 bands at 0.1 m spatial resolution. Four datasets were prepared: hyperspectral imagery alone, hyperspectral imagery combined with canopy height, simulated natural-colour imagery alone, and simulated natural-colour imagery combined with canopy height. All datasets were transformed using principal component analysis, and the resulting features were classified with a supervised maximum likelihood classifier. Accuracy was evaluated using 1,000 field-validated reference points. The classification based only on hyperspectral data reached 89% overall accuracy but performed poorly for vine detection, with a producer's accuracy of 48.9% and an F1-score of 0.61. When canopy height information was included, performance improved to 96% overall accuracy, a Kappa coefficient of 0.85, and an F1-score of 0.88. Simulated natural-colour imagery combined with canopy height achieved intermediate results, with 93% overall accuracy and an F1-score of 0.79.
These findings confirm that integrating spectral and structural information enhances vineyard mapping and provides a reliable basis for precision viticulture applications.
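For readers comparing the accuracy figures above, the relationships between overall accuracy, producer's accuracy, F1-score, and the Kappa coefficient can be sketched from a binary confusion matrix; the matrix below is illustrative, not the study's actual counts:

```python
import numpy as np

def classification_metrics(cm):
    """Accuracy metrics from a 2x2 confusion matrix, where cm[i, j] counts
    reference points of class i predicted as class j
    (class 0 = non-vine, class 1 = vine)."""
    total = cm.sum()
    overall = np.trace(cm) / total
    producers = cm[1, 1] / cm[1, :].sum()   # recall for the vine class
    users = cm[1, 1] / cm[:, 1].sum()       # precision for the vine class
    f1 = 2 * producers * users / (producers + users)
    # Cohen's kappa: agreement beyond what chance alone would give
    expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total**2
    kappa = (overall - expected) / (1 - expected)
    return overall, producers, users, f1, kappa

# Hypothetical 1,000-point validation (rows: reference, columns: predicted)
cm = np.array([[820, 20],
               [ 30, 130]])
oa, pa, ua, f1, kappa = classification_metrics(cm)
```

A high overall accuracy paired with a low producer's accuracy, as in the HS-only result above, simply means the rare vine class is being missed while the dominant non-vine class is classified well.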

Similar Papers
  • Conference Article
  • Cited by 7
  • 10.1109/igarss.2005.1526783
Segment-based characterization of roof surfaces using hyperspectral and laser scanning data
  • Jul 25, 2005
  • D Lemp + 1 more

Using remote sensing for urban applications makes high demands on the resolution of the data used - not only its geometric resolution, in terms of ground sampling distance, but also its spectral resolution, in terms of the number of narrow bands, allowing an almost continuous representation of the spectrum. In order to deal with the variability and number of different surface materials with sometimes quite similar spectral properties, hyperspectral data with its high spectral resolution seems to be mandatory for applications depending on classification of urban surface materials. A recent project of the Chair of Water Chemistry, Engler-Bunte-Institute (EBI), and the Institute of Photogrammetry and Remote Sensing (IPF) - both University of Karlsruhe - aims at the quantitative assessment of pollutants on urban surfaces by chemical analysis and image processing methods. Our research focus at IPF is the characterization of roof surfaces by combined use of hyperspectral and laser scanning data using a segment-based approach. The laser scanning data is primarily used for geometric characterization of the roof patches, but also in combination with the hyperspectral data for material classification. The hyperspectral data already gives rich information about the material; nevertheless, the geometry of the roof surface restricts the possible material classes and therefore eases discrimination of materials with almost similar spectra.

I. INTRODUCTION The assessment of pollutants on urban surfaces and their impact on the pollution load in rain runoffs is a small but nevertheless important topic in the assessment of the influence of human activity on the status of surface waters and groundwater. Thus, the aim of our research project is not only to derive information on the amount of sealed surfaces in an urban area, but to derive a detailed surface material map.
The necessary classes for our application are identified based on chemical measurements on reference roof surfaces, observing that different roof constructions/materials may have similar polluting behaviour. This allows merging of classes with respect to the resulting pollution, although they may have different spectral properties. One example is material combinations including a bitumen layer and a covering layer of stone materials. The pure material-spectra-oriented classification (cf. (1)) is in our approach supported by geometric clues of surface patches, thus combining geometric data from laser scanning and hyperspectral data for the characterization of roof segments. In the following, we give a short overview of related work. Section III introduces the input data. Our approach for the characterization of roof surfaces in urban areas is presented in Section IV. Recent results as well as a quantitative evaluation follow in Section V, closed by the conclusions.

II. RELATED WORK Laser scanning and hyperspectral data are often used exclusively, either to derive the geometry based on laser scanning data (cf. (2)) or to derive material maps based on hyperspectral data (cf. (1)). (3) use hyperspectral data (AVIRIS) in order to improve reconstruction results based on IFSAR, namely to mask vegetation areas, but the data used has only limited geometric resolution. In (4), results of hyperspectral data analysis for urban areas based on ROSIS and DAIS data are presented, also discussing the impact of spectral and geometric resolution. (5) integrate Digital Surface Model (DSM) information in order to improve the results of hyperspectral classification based on HYDICE data. In their research the DSM, derived from aerial imagery, is applied for the discrimination of roofs and ground surfaces. The materials may have a similar spectrum, but they can be discriminated based on the height information.
(6) show material mapping techniques based on deterministic similarity measures for spectral matching to separate target from non-target pixels in urban areas.

  • Research Article
  • Cited by 34
  • 10.1080/15481603.2020.1829377
Peatland leaf-area index and biomass estimation with ultra-high resolution remote sensing
  • Oct 2, 2020
  • GIScience & Remote Sensing
  • Aleksi Räsänen + 7 more

There is fine-scale spatial heterogeneity in key vegetation properties including leaf-area index (LAI) and biomass in treeless northern peatlands, and hyperspectral drone data with high spatial and spectral resolution could detect the spatial patterns with high accuracy. However, the advantage of hyperspectral drone data has not been tested in a multi-source remote sensing approach (i.e. inclusion of multiple different remote sensing datatypes); and overall, sub-meter-level LAI and biomass maps have largely been absent. We evaluated the detectability of LAI and biomass patterns at a northern boreal fen (Halssiaapa) in northern Finland with multi-temporal and multi-source remote sensing data and assessed the benefit of hyperspectral drone data. We measured vascular plant percentage cover and height as well as moss cover in 140 field plots and connected the structural information to measured aboveground vascular LAI and biomass and moss biomass with linear regressions. We predicted both total and plant functional type (PFT) specific LAI and biomass patterns with random forest regressions, with predictors including RGB and hyperspectral drone imagery (28 bands in a spectral range of 500–900 nm), aerial and satellite imagery, as well as topography and vegetation height information derived from structure-from-motion drone photogrammetry and aerial lidar data. The modeling performance was between moderate and good for total LAI and biomass (mean explained variance between 49.8 and 66.5%) and variable for PFTs (0.3–61.6%). Hyperspectral data increased model performance in most of the regressions, usually relatively little, but in some of the regressions the inclusion of hyperspectral data even decreased model performance (change in mean explained variance between −14.5 and 9.1%-points). The most important features in the regressions included drone topography, vegetation height, and hyperspectral and RGB features.
The spatial patterns and landscape estimates of LAI and biomass were quite similar in regressions with or without hyperspectral data, in particular for moss and total biomass. The results suggest that the fine-scale spatial patterns of peatland LAI and biomass can be detected with multi-source remote sensing data, vegetation mapping should include both spectral and topographic predictors at sub-meter-level spatial resolution and that hyperspectral imagery gives only slight benefits.

  • Research Article
  • Cited by 4
  • 10.3390/s24072089
A Study on Dimensionality Reduction and Parameters for Hyperspectral Imagery Based on Manifold Learning.
  • Mar 25, 2024
  • Sensors
  • Wenhui Song + 5 more

With the rapid advancement of remote-sensing technology, the spectral information obtained from hyperspectral remote-sensing imagery has become increasingly rich, facilitating detailed spectral analysis of Earth's surface objects. However, the abundance of spectral information presents certain challenges for data processing, such as the "curse of dimensionality" leading to the "Hughes phenomenon", "strong correlation" due to high resolution, and "nonlinear characteristics" caused by varying surface reflectances. Consequently, dimensionality reduction of hyperspectral data emerges as a critical task. This paper begins by elucidating the principles and processes of hyperspectral image dimensionality reduction based on manifold theory and learning methods, in light of the nonlinear structures and features present in hyperspectral remote-sensing data, and formulates a dimensionality reduction process based on manifold learning. Subsequently, this study explores the capabilities of feature extraction and low-dimensional embedding for hyperspectral imagery using manifold learning approaches, including principal components analysis (PCA), multidimensional scaling (MDS), and linear discriminant analysis (LDA) for linear methods; and isometric mapping (Isomap), locally linear embedding (LLE), Laplacian eigenmaps (LE), Hessian locally linear embedding (HLLE), local tangent space alignment (LTSA), and maximum variance unfolding (MVU) for nonlinear methods, based on the Indian Pines hyperspectral dataset and Pavia University dataset. Furthermore, the paper investigates the optimal neighborhood computation time and overall algorithm runtime for feature extraction in hyperspectral imagery, varying by the choice of neighborhood k and intrinsic dimensionality d values across different manifold learning methods. 
Based on the outcomes of feature extraction, the study examines the classification experiments of various manifold learning methods, comparing and analyzing the variations in classification accuracy and Kappa coefficient with different selections of neighborhood k and intrinsic dimensionality d values. Building on this, the impact of selecting different bandwidths t for the Gaussian kernel in the LE method and different Lagrange multipliers λ for the MVU method on classification accuracy, given varying choices of neighborhood k and intrinsic dimensionality d, is explored. Through these experiments, the paper investigates the capability and effectiveness of different manifold learning methods in feature extraction and dimensionality reduction within hyperspectral imagery, as influenced by the selection of neighborhood k and intrinsic dimensionality d values, identifying the optimal neighborhood k and intrinsic dimensionality d value for each method. A comparison of classification accuracies reveals that the LTSA method yields superior classification results compared to other manifold learning approaches. The study demonstrates the advantages of manifold learning methods in processing hyperspectral image data, providing an experimental reference for subsequent research on hyperspectral image dimensionality reduction using manifold learning methods.
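Several of the linear and manifold methods listed above are available in scikit-learn; a minimal sketch of producing embeddings for a given neighborhood k and intrinsic dimensionality d follows (toy random data, not the Indian Pines or Pavia University scenes):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap, LocallyLinearEmbedding

rng = np.random.default_rng(0)
# Toy stand-in for a hyperspectral pixel matrix: 300 pixels x 50 bands
X = rng.random((300, 50))

k, d = 10, 8  # neighborhood size and intrinsic dimensionality to tune
embeddings = {
    "PCA":    PCA(n_components=d).fit_transform(X),
    "Isomap": Isomap(n_neighbors=k, n_components=d).fit_transform(X),
    "LLE":    LocallyLinearEmbedding(n_neighbors=k, n_components=d).fit_transform(X),
}
# Each method maps the 50 bands down to d features per pixel
```

In a real experiment one would sweep k and d as the paper does, feed each embedding to a classifier, and compare accuracy and Kappa.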

  • Research Article
  • Cited by 54
  • 10.1080/22797254.2018.1434424
Tree species classification in Norway from airborne hyperspectral and airborne laser scanning data
  • Jan 1, 2018
  • European Journal of Remote Sensing
  • Øivind Due Trier + 6 more

This article compares four new automatic methods to discriminate between spruce, pine and birch, which are the dominating tree species in Norwegian forests. Airborne laser scanning and hyperspectral data were used. The laser scanning data was used to mask pixels with low or no vegetation in the hyperspectral data. A green–blue ratio was used to remove shadow areas from tree canopies, and the normalized difference vegetation index to remove dead vegetation and non-vegetation. The best method was hyperspectral pixel classification with 160 spectral channels in the visible and near-infrared spectrum, using a deep neural network. This method achieved an 87% correct classification rate. Partial least squares regression for hyperspectral pixel classification achieved 78%. Deep neural network image classification using canopy height blended with three hyperspectral channels achieved 74%. A simple pixel classification method based on two spectral indices resulted in 67% correct classification. A possible future improvement is to find a better way to combine hyperspectral data with canopy height data in a deep neural network.
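The masking steps described above (a green/blue ratio against canopy shadow, NDVI against dead and non-vegetation) can be sketched as follows; the thresholds are illustrative placeholders, not the paper's values:

```python
import numpy as np

def vegetation_mask(green, blue, red, nir, gb_thresh=1.0, ndvi_thresh=0.4):
    """Keep live, sunlit vegetation: the green/blue ratio drops shadowed
    canopy, NDVI drops dead vegetation and non-vegetation.
    Thresholds here are illustrative, not the paper's."""
    ndvi = (nir - red) / (nir + red + 1e-9)
    return (green / (blue + 1e-9) > gb_thresh) & (ndvi > ndvi_thresh)

# Two hypothetical pixels: a sunlit vegetated one and a shadowed one
green = np.array([0.30, 0.10])
blue  = np.array([0.10, 0.20])
red   = np.array([0.05, 0.30])
nir   = np.array([0.50, 0.35])
mask = vegetation_mask(green, blue, red, nir)
```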

  • Research Article
  • 10.25932/publishup-52057
DeepGeoMap : a deep learning convolutional neural network architecture for geological hyperspectral classification and mapping
  • Jan 1, 2021
  • Helge Leoard Carl Dämpfling

In recent years, deep learning improved the way remote sensing data is processed. The classification of hyperspectral data is no exception. 2D or 3D convolutional neural networks have outperformed classical algorithms on hyperspectral image classification in many cases. However, geological hyperspectral image classification poses several challenges, often involving spatially more complex objects than in other disciplines of hyperspectral imaging, which tend to deal with spatially more uniform objects (e.g., industrial applications, aerial urban or farming land cover types). In geological hyperspectral image classification, classical algorithms that focus on the spectral domain still often show higher accuracy, more sensible results, or greater flexibility due to their independence from spatial information. In the framework of this thesis, inspired by classical machine learning algorithms that focus on the spectral domain, such as the binary feature fitting (BFF) and EnGeoMap algorithms, the author proposes, develops, tests, and discusses a novel, spectrally focused, spatial-information-independent, deep multi-layer convolutional neural network, named 'DeepGeoMap', for hyperspectral geological data classification. More specifically, the architecture of DeepGeoMap uses a sequential series of different 1D convolutional neural network layers and fully connected dense layers, and utilizes rectified linear unit and softmax activation, 1D max and 1D global average pooling layers, additional dropout to prevent overfitting, and a categorical cross-entropy loss function with Adam gradient descent optimization. DeepGeoMap was realized using Python 3.7 and the machine and deep learning interface TensorFlow with graphical processing unit (GPU) acceleration.
This 1D spectrally focused architecture allows DeepGeoMap models to be trained with hyperspectral laboratory image data of geochemically validated samples (e.g., ground truth samples for aerial or mine face images) and then use this laboratory trained model to classify other or larger scenes, similar to classical algorithms that use a spectral library of validated samples for image classification. The classification capabilities of DeepGeoMap have been tested using two geological hyperspectral image data sets. Both are geochemically validated hyperspectral data sets one based on iron ore and the other based on copper ore samples. The copper ore laboratory data set was used to train a DeepGeoMap model for the classification and analysis of a larger mine face scene within the Republic of Cyprus, where the samples originated from. Additionally, a benchmark satellite-based dataset, the Indian Pines data set, was used for training and testing. The classification accuracy of DeepGeoMap was compared to classical algorithms and other convolutional neural networks. It was shown that DeepGeoMap could achieve higher accuracies and outperform these classical algorithms and other neural networks in the geological hyperspectral image classification test cases. The spectral focus of DeepGeoMap was found to be the most considerable advantage compared to spectral-spatial classifiers like 2D or 3D neural networks. This enables DeepGeoMap models to train data independently of different spatial entities, shapes, and/or resolutions.
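The spectrally focused 1D design can be illustrated with a minimal NumPy forward pass: 1D convolution over the band axis, ReLU, global average pooling, and a softmax dense layer. The weights below are random and untrained; the actual DeepGeoMap is a trained multi-layer TensorFlow model with dropout:

```python
import numpy as np

def conv1d(x, kernels):
    """Valid-mode 1D convolution of one spectrum with a bank of kernels."""
    out = np.empty((kernels.shape[0], x.shape[0] - kernels.shape[1] + 1))
    for i, k in enumerate(kernels):
        # Reverse the kernel so np.convolve performs cross-correlation
        out[i] = np.convolve(x, k[::-1], mode="valid")
    return out

def relu(x):
    return np.maximum(x, 0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(1)
spectrum = rng.random(200)                  # one pixel spectrum, 200 bands
kernels = rng.normal(size=(16, 7))          # 16 1D filters of width 7
W = rng.normal(size=(5, 16))                # dense layer: 16 features -> 5 classes

features = relu(conv1d(spectrum, kernels))  # (16, 194) feature maps
pooled = features.mean(axis=1)              # 1D global average pooling
probs = softmax(W @ pooled)                 # class probabilities
```

Because every operation runs along the band axis of a single pixel, the classifier is independent of spatial shape and resolution, which is the property the thesis emphasizes.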

  • Research Article
  • Cited by 21
  • 10.1007/s12205-013-1178-z
Extraction of individual tree crown using hyperspectral image and LiDAR data
  • Oct 3, 2014
  • KSCE Journal of Civil Engineering
  • Hien Phu La + 3 more


  • Book Chapter
  • 10.5772/14662
EM-Based Bayesian Fusion of Hyperspectral and Multispectral Images
  • Jan 12, 2011
  • Yifan Zhang

During the last two decades, the number of spectral bands in optical remote sensing technology kept growing steadily, going from multispectral (MS) to hyperspectral (HS) data sets. HS images employ hundreds of contiguous spectral bands to capture and process spectral information over a range of wavelengths, compared to the tens of discrete spectral bands used in MS images (Chang, 2003). This increase in spectral accuracy delivers more information, allowing a whole range of new and more precise applications. The detailed spectral information of HS images is helpful for interpretation, classification and recognition. However, in remote sensors, a trade-off usually exists between SNR, spatial and spectral resolutions due to physical limitations, data-transfer requirements and other practical reasons. In most cases, high spatial and spectral resolutions are not available in a single image, which makes the spatial resolution of HS images usually lower than that of MS images (Gomez et al., 2001). In practice, many applications require high accuracy both spectrally and spatially, which inspires research on spatial resolution enhancement techniques for HS images (Gomez et al., 2001; Duijster et al., 2009; Zhang & He, 2007; Hardie et al., 2004; Eismann & Hardie, 2005; 2004). When more than one observation of the scene is available, a popular technique dealing with this limitation is image fusion, a well-studied field for more than ten years. As a prototype problem, an image of high spectral resolution is usually combined with an image of high spatial resolution to obtain an image of optimal resolution both spectrally and spatially. Most fusion techniques for spatial resolution improvement were developed for the specific purpose of enhancing an MS image by using a panchromatic (Pan) image of higher spatial resolution, also referred to as pansharpening.
Principal component analysis (PCA) (Chavez et al., 1991; Shettigara, 1992) and Intensity-Hue-Saturation (IHS) transform (Carper et al., 1990; Edwards & Davis, 1994; Tu et al., 2001) based techniques are the most commonly used ones. The Pan image is applied to totally or partially substitute the 1st principal component or intensity component of the coregistered and resampled MS image. To generalize to more than three bands and to reduce spectral degradation, generalized IHS (GIHS) transforms (Tu et al., 2004) and generalized intensity modulation techniques (Alparone et al., 2004) were defined. High-pass filtering and high-pass modulation techniques were developed (Chavez et al., 1991; Shettigara, 1992; Liu & Moore, 1998), in which spatial high-frequency information is extracted and injected adequately into each band of the MS image. With the rise of multiresolution analysis, many researchers have proposed pansharpening techniques using Gaussian and Laplacian pyramids as well as discrete decimated and undecimated wavelet transforms (WTs).
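The PCA component-substitution scheme described above can be sketched in a few lines (synthetic data; a real pipeline adds careful coregistration and more refined histogram matching):

```python
import numpy as np

def pca_pansharpen(ms, pan):
    """PCA component-substitution sketch. ms is a (bands, H, W) MS image
    already resampled to the pan grid; pan is the (H, W) panchromatic image.
    PC1 is replaced by the variance-matched pan image, then back-transformed."""
    bands, h, w = ms.shape
    X = ms.reshape(bands, -1).T                  # pixels x bands
    mean = X.mean(axis=0)
    Xc = X - mean
    # PCA via eigendecomposition of the band covariance matrix
    vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    vecs = vecs[:, np.argsort(vals)[::-1]]       # strongest component first
    pcs = Xc @ vecs
    # Match pan to PC1's mean/variance, then substitute it
    p = pan.ravel()
    pcs[:, 0] = (p - p.mean()) / p.std() * pcs[:, 0].std() + pcs[:, 0].mean()
    return (pcs @ vecs.T + mean).T.reshape(bands, h, w)

# Hypothetical 4-band scene and pan image on the same 32x32 grid
rng = np.random.default_rng(5)
ms = rng.random((4, 32, 32))
pan = rng.random((32, 32))
fused = pca_pansharpen(ms, pan)
```

Because only the first component is replaced and its mean/variance are matched, the per-band means of the fused product stay close to the original MS image, which is the usual argument for this family of methods.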

  • Research Article
  • Cited by 59
  • 10.1016/j.ecolind.2017.10.066
Predicting stem diameters and aboveground biomass of individual trees using remote sensing data
  • Nov 5, 2017
  • Ecological Indicators
  • Michele Dalponte + 5 more


  • Research Article
  • Cited by 86
  • 10.1111/j.1365-2621.2005.tb11517.x
Detection of Fecal Contamination on Cantaloupes Using Hyperspectral Fluorescence Imagery
  • Oct 1, 2005
  • Journal of Food Science
  • Angela M Vargas + 7 more

To determine whether detection of fecal contamination on cantaloupes is possible using fluorescence imaging, hyperspectral images of cantaloupes artificially contaminated with a range of diluted bovine feces were acquired from 425 to 774 nm in response to ultraviolet-A (320 to 400 nm) excitation. Evaluation of images at emission peak wavelengths indicated that 675 nm exhibited the greatest contrast between feces-contaminated and untreated surface areas. Two-band ratios enhanced the contrast between the feces-contaminated spots and untreated cantaloupe surfaces compared with the single-band images. The 595/655-nm, 655/520-nm, and 555/655-nm ratio images provided relatively high detection rates ranging from 79% to 96% across all feces dilutions. However, both single-band and ratio methods showed a number of false positives caused by such features as scarred tissue on cantaloupes. Principal component analysis (PCA) was performed using the entire hyperspectral image data; the 2nd and 5th principal component (PC) images exhibited differential responses between feces spots and false positives. The combined use of the 2 PC images demonstrated the detection of feces spots (for example, at a minimum level of 16-μg/mL dry fecal matter) with minimal false positives. Based on the PC weighting coefficients, the dominant wavelengths were 465, 487, 531, 607, 643, and 688 nm. This research demonstrated the potential of multispectral-based fluorescence imaging for online applications for detection of fecal contamination on cantaloupes.
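A two-band ratio image of the kind described above is computed directly from the hypercube by picking the bands nearest the two wavelengths; the cube below is random stand-in data, not the cantaloupe imagery:

```python
import numpy as np

# Hypothetical hypercube: (bands, H, W) with a matching wavelength axis
wavelengths = np.arange(425, 775)  # 1 nm steps over the 425-774 nm range
cube = np.random.default_rng(2).random((wavelengths.size, 8, 8))

def band(cube, wavelengths, nm):
    """Return the band image closest to the requested wavelength (nm)."""
    return cube[np.argmin(np.abs(wavelengths - nm))]

# e.g. the 595/655 nm ratio reported above to enhance contrast
ratio = band(cube, wavelengths, 595) / band(cube, wavelengths, 655)
```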

  • Research Article
  • Cited by 80
  • 10.3390/rs11212540
Detection of Pine Shoot Beetle (PSB) Stress on Pine Forests at Individual Tree Level using UAV-Based Hyperspectral Imagery and Lidar
  • Oct 29, 2019
  • Remote Sensing
  • Qinan Lin + 4 more

In recent years, the outbreak of the pine shoot beetle (PSB), Tomicus spp., has caused serious shoot damage and the death of millions of trees in Yunnan pine forests in southwestern China. It is urgent to develop a convincing approach to accurately assess the shoot damage ratio (SDR) for monitoring PSB insects at an early stage. Unmanned aerial vehicle (UAV)-based sensors, including hyperspectral imaging (HI) and lidar, have very high spatial and spectral resolutions, which are very useful for assessing forest health. However, very few studies have utilized HI and lidar data to estimate SDRs and compare their predictive power for mapping PSB damage at the individual tree level. Additionally, the fusion of HI and lidar data may improve detection accuracy, but it has not been well studied. In this study, UAV-based HI and lidar data were fused to detect PSB. We systematically evaluated the potential of a hyperspectral approach (HI data only), a lidar approach (lidar data only), and a combined approach (HI plus lidar data) to characterize PSB damage of individual trees using the Random Forest (RF) algorithm. The most innovative point is the proposed new method to extract the three-dimensional (3D) shadow distribution of each tree crown based on a lidar point cloud and the 3D radiative transfer model RAPID. The results show that: (1) for the accuracy of estimating the SDR of individual trees, the lidar approach (R2 = 0.69, RMSE = 12.28%) performed better than the hyperspectral approach (R2 = 0.67, RMSE = 15.87%), and in addition, it was useful to detect dead trees with an accuracy of 70%; (2) the combined approach has the highest accuracy (R2 = 0.83, RMSE = 9.93%) for mapping PSB damage degrees; and (3) when combining HI and lidar data to predict SDRs, two variables have the most contributions: the leaf chlorophyll content (Cab) derived from hyperspectral data and the return intensity of the top of the shaded crown (Int_Shd_top) from lidar metrics.
This study confirms the high possibility to accurately predict SDRs at individual tree level if combining HI and lidar data. The 3D radiative transfer model can determine the 3D crown shadows from lidar, which is a key information to combine HI and lidar. Therefore, our study provided a guidance to combine the advantages of hyperspectral and lidar data to accurately measure the health of individual trees, enabling us to prioritize areas for forest health promotion. This method may also be used for other 3D land surfaces, like urban areas.

  • Conference Article
  • Cited by 5
  • 10.1117/12.810027
Hyperspectral imaging of blood perfusion and chromophore distribution in skin
  • Feb 12, 2009
  • Lise L Randeberg + 2 more

Hyperspectral imaging is a modality which combines spatial resolution and spectroscopy in one technique. Analysis of hyperspectral data from biological samples is a demanding task due to the large amount of data, and due to the complex optical properties of biological tissue. In this study it was investigated if depth information could be revealed from hyperspectral images using a combination of image analysis and analytic simulations of skin reflectance. It was also investigated if hyperspectral imaging could be utilized to monitor changes in the distribution of hemoglobin species after smoking. Hyperspectral data in the wavelength range 400-1000nm were collected from the forearm of 15 non-smokers and 5 smokers. The hyperspectral images were analyzed with respect to the distribution of hemoglobin species and vascular structures. Changes in the vascular system due to smoking were also evaluated. Principal component analysis (PCA), Spectral angle mapping (SAM), and Mixture tuned matched filtering (MTMF) were used to enhance vascular structures. Emphasis was put on identifying apparent and true absorption spectra for the present chromophores by combining image analysis and an analytical photon transport model. The results show that the depth resolution of hyperspectral images can be better understood using analytical simulations. Modulation of the chromophore spectra by the optical properties of overlying tissue was found to be an important mechanism causing the depth resolution in hyperspectral images. It was also found that hyperspectral imaging and image analysis can be successfully applied to quantify skin changes following smoking.
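Spectral angle mapping, one of the enhancement methods named above, reduces to the angle between two spectra; a minimal sketch follows. SAM is scale-invariant, which is why it suppresses brightness differences such as uneven illumination:

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Spectral Angle Mapping (SAM): the angle in radians between a pixel
    spectrum and a reference spectrum. Small angles mean similar spectral
    shape, independent of overall brightness."""
    cos = pixel @ reference / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos, -1.0, 1.0))

# Scaling a spectrum (e.g. brighter illumination) leaves the angle at zero
v = np.array([0.1, 0.4, 0.7])
angle_same = spectral_angle(v, 2.5 * v)
```

In practice each pixel spectrum is compared against a library of reference spectra and assigned to, or scored by, the smallest angle.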

  • Research Article
  • Cited by 8
  • 10.3390/rs16050875
Spatial Resolution Enhancement of Vegetation Indexes via Fusion of Hyperspectral and Multispectral Satellite Data
  • Mar 1, 2024
  • Remote Sensing
  • Luciano Alparone + 2 more

The definition and calculation of a spectral index suitable for characterizing vegetated landscapes depend on the number and widths of the bands of the imaging instrument. Here, we point out the advantages of performing the fusion of hyperspectral (HS) satellite data with the multispectral (MS) bands of Sentinel-2 to calculate such vegetation indexes as the normalized area over reflectance curve (NAOC) and the red-edge inflection point (REIP), which benefit from the availability of quasi-continuous pixel spectra. Unfortunately, MS data may be acquired from satellite platforms with very high spatial resolution; HS data may not. Despite their excellent spectral resolution, satellite imaging spectrometers currently resolve areas not greater than 30 × 30 m², where different thematic classes of landscape may be mixed together to form a unique pixel spectrum. A way to resolve mixed pixels is to perform the fusion of the HS dataset with a dataset produced by an MS scanner that images the same scene with a finer spatial resolution. The HS dataset is sharpened from 30 m to 10 m by means of the Sentinel-2 bands that have all been previously brought to 10 m. To do so, the hyper-sharpening protocol, that is, m:n fusion, is exploited in two nested steps: the first to bring the 20 m bands of Sentinel-2 to 10 m, the second to sharpen all the 30 m HS bands to 10 m by using the Sentinel-2 bands previously hyper-sharpened to 10 m. Results are presented on an agricultural test site in The Netherlands imaged by Sentinel-2 and by the satellite imaging spectrometer recently launched as part of the environmental mapping and analysis program (EnMAP). Firstly, the excellent statistical consistency of the fused HS data with the original MS and HS data is evaluated by means of analysis tools, both existing and developed ad hoc for this specific case.
Then, the spatial and radiometric accuracy of REIP and NAOC calculated from fused HS data are analyzed on the classes of pure and mixed pixels. On pure pixels, the values of REIP and NAOC calculated from fused data are consistent with those calculated from the original HS data. Conversely, mixed pixels are spectrally unmixed by the fusion process to resolve the 10 m scale of the MS data. How the proposed method can be used to check the temporal evolution of vegetation indexes when a unique HS image and many MS images are available is the object of a final discussion.
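REIP can be computed from red-edge reflectances in several ways; one common four-point linear interpolation (Guyot & Baret) is sketched below with hypothetical reflectances. Note that the paper itself exploits full quasi-continuous fused spectra rather than four bands:

```python
import numpy as np

def reip_linear(r670, r700, r740, r780):
    """Red-edge inflection point (nm) via the common four-point linear
    interpolation (Guyot & Baret); one of several REIP formulations.
    The inflection reflectance is taken midway between 670 and 780 nm."""
    r_edge = (r670 + r780) / 2.0
    return 700.0 + 40.0 * (r_edge - r700) / (r740 - r700)

# Hypothetical reflectances for a healthy canopy
reip = reip_linear(r670=0.05, r700=0.12, r740=0.42, r780=0.55)
```

A healthy, chlorophyll-rich canopy shifts the inflection point toward longer wavelengths, which is why REIP is a useful vegetation status index.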

  • Conference Article
  • Cited by 2
  • 10.1117/12.929803
A new digital repository for remotely sensed hyperspectral imagery with unmixing-based retrieval functionality
  • Oct 19, 2012
  • Jorge Sevilla + 3 more

Hyperspectral imaging is concerned with the measurement, analysis, and interpretation of spectra acquired from a given scene (or specific object) at a short, medium or long distance by an airborne or satellite sensor. Over the last few years, hyperspectral image data sets have been collected for a great number of locations over the world, using a variety of instruments for Earth observation. Despite the increasing importance of hyperspectral images in remote sensing applications, there is no common repository of hyperspectral data intended to distribute and share hyperspectral data sets in the community. Quite the opposite: the hyperspectral data sets which are available for public use are spread among different storage locations and present significant heterogeneity regarding the storage format, associated meta-data (if any), or ground-truth availability. As a result, the development of a standardized hyperspectral data repository is a highly desired goal in the remote sensing community. In this paper, we take a necessary first step towards the development of a digital repository for remotely sensed hyperspectral data. The proposed system allows uploading new hyperspectral data sets along with meta-data, ground-truth and analysis results, with the ultimate goal of sharing publicly available hyperspectral images within the remote sensing community. The database has been designed to allow storing relevant information for the hyperspectral data available through the system, including basic image characteristics (width, height, number of bands, format) and more advanced meta-data (ground-truth information, publications in which the data has been used).
The current implementation consists of a front-end that eases the management of images through a web interface, and it already contains both synthetic and real hyperspectral images from highly representative instruments, such as NASA's Airborne Visible/Infra-Red Imaging Spectrometer (AVIRIS) over the Cuprite mining district in Nevada. Most importantly, the developed system includes a spectral unmixing-based, content-based image retrieval (CBIR) functionality that allows searching for images based on their spectral unmixing information (spectrally pure components, or endmembers, and their associated abundances in the scene). This information is stored as metadata associated with each hyperspectral image instance and is then used to search for and retrieve images based on information content. This paper presents the design of the system and a preliminary validation of the unmixing-based retrieval functionality using both synthetic and real hyperspectral images stored in the database.
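One plausible way to compare images by their stored unmixing metadata, as the retrieval functionality described above requires, is to measure the spectral angle between the endmember sets of two images. This is a hedged sketch of such a dissimilarity measure, not the paper's actual retrieval metric; the function names and the averaging scheme are illustrative assumptions:

```python
import numpy as np

def spectral_angle(a, b):
    """Spectral angle (radians) between two spectra."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def endmember_set_distance(E1, E2):
    """Match each endmember in E1 (rows) to its closest endmember
    in E2 by spectral angle, then average the best-match angles:
    a simple dissimilarity between two images' unmixing metadata.
    A distance of 0 means E2 contains every endmember of E1."""
    return float(np.mean([min(spectral_angle(e, f) for f in E2) for e in E1]))
```

Ranking stored images by such a distance to a query image's endmembers would retrieve scenes containing similar materials, regardless of where those materials appear spatially.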

  • Research Article
  • Citations: 25
  • 10.3390/rs9060589
Mapping Spartina alterniflora Biomass Using LiDAR and Hyperspectral Data
  • Jun 10, 2017
  • Remote Sensing
  • Jing Wang + 3 more

Large-scale coastal reclamation has caused significant changes in Spartina alterniflora (S. alterniflora) distribution in coastal regions of China. However, few studies have focused on estimating wetland vegetation biomass, especially that of S. alterniflora, in coastal regions using LiDAR and hyperspectral data. In this study, the applicability of LiDAR and hyperspectral data for estimating S. alterniflora biomass and mapping its distribution in coastal regions of China was explored, in order to address the problems in wetland vegetation biomass estimation caused by differences in vegetation type and canopy height. Results showed that the variable most strongly correlated with S. alterniflora biomass was vegetation canopy height (0.817), followed by the Normalized Difference Vegetation Index (NDVI) (0.635), the Atmospherically Resistant Vegetation Index (ARVI) (0.631), the Visible Atmospherically Resistant Index (VARI) (0.599), and the Ratio Vegetation Index (RVI) (0.520). A multivariate linear model of S. alterniflora biomass was developed using backward variable elimination, achieving an R² of 0.902 and a residual predictive deviation (RPD) of 2.62. This species-specific model was more accurate than a model fitted to mixed wetland vegetation, because it avoided the estimation errors caused by differences in spectral features and canopy height among wetland vegetation types. The estimated S. alterniflora biomass agreed with field survey results. By fusing LiDAR and hyperspectral data, the proposed method provides an advantage for S. alterniflora mapping: integrating high-spatial-resolution hyperspectral imagery with LiDAR-derived canopy height significantly improved the accuracy of biomass mapping.
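The backward-elimination step can be sketched in plain NumPy: start from all candidate predictors (canopy height plus the vegetation indices) and repeatedly drop the one whose removal costs the least R², stopping once every remaining predictor matters. The stopping threshold and the variable names below are illustrative assumptions; the paper's actual elimination criterion may differ:

```python
import numpy as np

def fit_r2(X, y):
    """Ordinary least squares with an intercept; returns R^2."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

def backward_eliminate(X, y, names, min_r2_loss=0.01):
    """Greedy backward elimination: drop the predictor whose removal
    costs the least R^2, while that loss stays below min_r2_loss."""
    keep = list(range(X.shape[1]))
    while len(keep) > 1:
        base = fit_r2(X[:, keep], y)
        losses = [(base - fit_r2(X[:, [j for j in keep if j != i]], y), i)
                  for i in keep]
        loss, worst = min(losses)
        if loss >= min_r2_loss:
            break  # every remaining predictor contributes enough
        keep.remove(worst)
    return [names[i] for i in keep], fit_r2(X[:, keep], y)
```

Run on a design matrix whose columns are canopy height and the candidate indices, this yields the reduced predictor set and the final model's R², mirroring the kind of multivariate model reported above.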

  • Conference Article
  • Citations: 6
  • 10.1109/whispers.2016.8071662
Fusion of hyperspectral and LiDAR data using random feature selection and morphological attribute profiles
  • Aug 1, 2016
  • Sathishkumar Samiappan + 2 more

Hyperspectral imagery provides detailed information about land-cover materials over a wide spectral range, and land-cover classification using hyperspectral data has been an active topic of research. Elevation data from light detection and ranging (LiDAR) can aid the classification process in discriminating complex classes. Fusion of hyperspectral and LiDAR data has been investigated previously, with the goal of extracting features from both sources and combining them to improve land-cover classification accuracy. In this paper, we present a new fusion approach based on random feature selection (RFS) and morphological attribute profiles (APs). Our experimental study, conducted on a hyperspectral image and a digital surface model (DSM) derived from first-return LiDAR data collected over the Samford Ecological Research Facility, Queensland, Australia, indicates that the proposed approach yields excellent classification results.
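The random-feature-selection idea above can be sketched as an ensemble: stack the hyperspectral features with the LiDAR-derived DSM band, give each ensemble member a random feature subset, and let the members vote. This is a minimal sketch using a nearest-centroid base classifier for self-containedness; the paper's actual base classifier and its attribute-profile features are not reproduced here, and all parameter values are illustrative assumptions:

```python
import numpy as np

def random_feature_ensemble(X_train, y_train, X_test, n_models=15,
                            subset_frac=0.5, seed=0):
    """Random feature selection (RFS) ensemble over stacked
    HS + LiDAR features. Each member sees a random feature subset
    and classifies by nearest class centroid; members then vote."""
    rng = np.random.default_rng(seed)
    n_feat = X_train.shape[1]
    k = max(1, int(subset_frac * n_feat))
    classes = np.unique(y_train)
    votes = np.zeros((X_test.shape[0], len(classes)), dtype=int)
    for _ in range(n_models):
        feats = rng.choice(n_feat, size=k, replace=False)
        # per-class centroids on the selected feature subset
        cents = np.stack([X_train[y_train == c][:, feats].mean(axis=0)
                          for c in classes])
        d = np.linalg.norm(X_test[:, None, feats] - cents[None], axis=2)
        votes[np.arange(len(d)), d.argmin(axis=1)] += 1
    return classes[votes.argmax(axis=1)]
```

Because each member sees a different slice of the stacked feature space, members that happen to draw the DSM band contribute height-based separability while the others contribute spectral separability, which is the intuition behind fusing the two sources this way.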
