Abstract

A distributed architecture for adaptive sensor fusion (a multisensor fusion neural net) is introduced for 3D imagery data that makes use of a super-resolution technique computed with a Bregman-iteration deconvolution algorithm. The architecture is a cascaded neural network consisting of two levels. The first level comprises two independent sensor neural nets: a spatial neural net and a spectral neural net. The second level is a fusion neural net, a single net that combines the information from the sensor level. The inputs to the sensor networks are obtained from unsupervised spatial and spectral segmentation algorithms that can be applied either to the original imagery or to imagery enhanced by the proposed super-resolution process. Spatial segmentation is obtained by a mean-shift method, and spectral segmentation is obtained by a Stochastic Expectation Maximization method. The decision outputs from the sensor nets are used to train the fusion net toward a specific overall decision. The overall approach is tested with an experiment involving a multisensor airborne collection of LIDAR and hyperspectral data over a university campus in Gulfport, MS. The success of the system in exploiting sensor synergism for enhanced classification is clearly demonstrated. The final class map contains the geographical classes as well as the signature classes.
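The two-level cascade described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the layer sizes, class count, and feature dimensions are assumptions, and the random inputs stand in for features derived from the mean-shift and Stochastic Expectation Maximization segmentations.

```python
# Minimal sketch (assumed structure, not the paper's code) of the cascaded
# fusion architecture: two first-level sensor nets whose decision outputs
# feed a second-level fusion net that produces the final class scores.
import torch
import torch.nn as nn

class SensorNet(nn.Module):
    """First level: maps one sensor's segmentation-derived features to class scores."""
    def __init__(self, n_features: int, n_classes: int, hidden: int = 32):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(n_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x):
        return self.layers(x)

class FusionNet(nn.Module):
    """Second level: combines the decision outputs of both sensor nets."""
    def __init__(self, n_classes: int, hidden: int = 32):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(2 * n_classes, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, spatial_scores, spectral_scores):
        return self.layers(torch.cat([spatial_scores, spectral_scores], dim=-1))

# Illustrative wiring per segment: spatial features (e.g., from mean-shift
# segmentation) and spectral features (e.g., from SEM segmentation) drive the
# sensor nets; their outputs drive the fusion net, which yields the final class.
n_spatial_feat, n_spectral_feat, n_classes = 8, 16, 6   # assumed sizes
spatial_net = SensorNet(n_spatial_feat, n_classes)
spectral_net = SensorNet(n_spectral_feat, n_classes)
fusion_net = FusionNet(n_classes)

x_spatial = torch.randn(4, n_spatial_feat)    # batch of 4 segments
x_spectral = torch.randn(4, n_spectral_feat)
final_scores = fusion_net(spatial_net(x_spatial), spectral_net(x_spectral))
print(final_scores.shape)   # torch.Size([4, 6])
```

In the paper's workflow the sensor nets would be trained on their respective segmentations first, and their decision outputs would then serve as training inputs for the fusion net.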

