Abstract

In recent years, there has been a growing number of small hyperspectral sensors suitable for deployment on unmanned aerial systems (UAS). The introduction of the hyperspectral snapshot sensor provides interesting opportunities for the acquisition of three-dimensional (3D) hyperspectral point clouds based on the structure-from-motion (SfM) workflow. In this study, we describe the integration of a 25-band hyperspectral snapshot sensor (PhotonFocus camera with IMEC 600–875 nm 5x5 mosaic chip) on a multi-rotor UAS. The sensor was integrated with a dual-frequency GNSS receiver for accurate time synchronisation and geolocation. We describe the sensor calibration workflow, including dark current and flat field characterisation. An SfM workflow was implemented to derive hyperspectral 3D point clouds and orthomosaics from overlapping frames. On-board GNSS coordinates for each hyperspectral frame assisted the SfM process and allowed for accurate direct georeferencing (< 10 cm absolute accuracy). We present the processing workflow to generate seamless hyperspectral orthomosaics from hundreds of raw images. Spectral reference panels and in-field spectral measurements were used to calibrate and validate the spectral signatures. This process provides a novel data type which contains both 3D geometric structure and detailed spectral information in a single format. To determine the potential improvements that such a format could provide, the core aim of this study was to compare the use of 3D hyperspectral point clouds with conventional hyperspectral imagery in the classification of two Eucalyptus tree species found in Tasmania, Australia. The IMEC SM5x5 hyperspectral snapshot sensor was flown over a small native plantation plot consisting of a mix of the Eucalyptus pauciflora and E. tenuiramis species.
High-overlap hyperspectral imagery was captured and then processed using SfM algorithms to generate both a hyperspectral orthomosaic and a dense hyperspectral point cloud. Additionally, to ensure the optimum spectral quality of the data, the characteristics of the hyperspectral snapshot imaging sensor were analysed using measurements captured in a laboratory environment. To support the generated hyperspectral point cloud data, both a file format and additional processing and visualisation software were developed to provide the necessary tools for a complete classification workflow. Results from the classification of the E. pauciflora and E. tenuiramis species revealed that the hyperspectral point cloud produced higher classification accuracy than conventional hyperspectral imagery under random forest classification, with an increase in classification accuracy from 67.2% to 73.8%. It was found that even when applied separately, the geometric and spectral feature sets from the point cloud each provided higher classification accuracy than the hyperspectral imagery.
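The laboratory characterisation mentioned above (dark current and flat field) corresponds to a standard radiometric pre-processing step. The sketch below is a minimal illustration of that step, not the authors' implementation; the function and variable names are hypothetical, and it assumes averaged dark and flat reference frames have already been captured:

```python
import numpy as np

def calibrate(raw, dark, flat):
    """Apply dark-current subtraction and flat-field correction.

    dark: mean of frames captured with the lens covered
          (dark-current / read-noise offset per pixel).
    flat: mean of dark-subtracted frames of a uniform target,
          capturing pixel-to-pixel sensitivity and vignetting.
    """
    corrected = raw.astype(np.float64) - dark
    gain = flat / flat.mean()            # normalised flat field
    return corrected / np.maximum(gain, 1e-6)

# Synthetic example: uniform scene, flat sensor response.
raw = np.full((4, 4), 110.0)
dark = np.full((4, 4), 10.0)
flat = np.ones((4, 4))
radiance = calibrate(raw, dark, flat)    # uniform 100.0 after correction
```

In practice the dark frame is temperature- and exposure-dependent, so reference frames are typically captured at the same settings as the flight imagery.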

Highlights

  • Recent advances in sensor technologies have yielded a new breed of hyperspectral snapshot imaging sensors which allow the capture of full-frame, hyperspectral images in a single exposure (Geelen et al. 2015)

  • This is characterised by the use of a multispectral filter array (MSFA), in which a mosaic of wavelength-specific filters is arranged on the image sensor

  • Significant variation is observed in the individual classification results, with cross-validation accuracies ranging from 56.9% to 78.4%


Introduction

Recent advances in sensor technologies have yielded a new breed of hyperspectral snapshot imaging sensors which allow the capture of full-frame, hyperspectral images in a single exposure (Geelen et al. 2015). Snapshot imaging sensors circumvent the scanning process by extending the technology used in many modern-day digital cameras. This is characterised by the use of a multispectral filter array (MSFA), in which a mosaic of wavelength-specific filters is arranged on the image sensor. A key implication of the ability to capture full-frame hyperspectral images in this way is that they retain the geometric constraints of standard optical imagery. This opens the possibility of applying conventional photogrammetric and structure-from-motion (SfM) principles to images with overlapping extent, in order to derive three-dimensional (3D) structural information in the form of point clouds (Aasen et al. 2015). This produces a rich new data source, which effectively combines the desirable aspects of both passive optical and LiDAR point clouds, and has the potential to lead to more robust classification methodologies.
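The MSFA arrangement described above means each 5x5 tile of sensor pixels holds 25 wavelength-specific filters, so a raw frame can be separated into 25 lower-resolution band images by sub-sampling at the tile period. The following is a minimal sketch of that separation, assuming the mosaic is aligned with the top-left corner of the frame; the function name and frame dimensions are illustrative, not taken from the sensor's actual software:

```python
import numpy as np

def demosaic_5x5(raw, pattern=5):
    """Split a raw MSFA frame into 25 per-band images.

    Sub-sampling the frame at the 5x5 tile period yields one
    image per spectral band, each 1/5 the resolution per axis.
    """
    h, w = raw.shape
    h -= h % pattern                     # crop to whole tiles
    w -= w % pattern
    raw = raw[:h, :w]
    bands = [raw[r::pattern, c::pattern]
             for r in range(pattern)
             for c in range(pattern)]
    return np.stack(bands, axis=0)       # shape (25, h//5, w//5)

# Illustrative frame size only (not the sensor's specification).
frame = np.zeros((1080, 2045), dtype=np.uint16)
cube = demosaic_5x5(frame)               # 25-band spectral cube
```

A production pipeline would additionally correct for the spatial offset between bands and for spectral crosstalk between neighbouring filters, but the core mosaic-to-cube rearrangement is as above.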

