Abstract

Owing to the material identification capability afforded by a large number of spectral bands, hyperspectral videos (HSVs) hold great potential for object tracking. However, because few HSVs are available for training, most hyperspectral trackers rely on manually designed rather than deeply learned features to describe objects, leaving substantial room to improve tracking performance. In this paper, we propose an end-to-end deep ensemble network (SEE-Net) to address this challenge. Specifically, we first establish a spectral self-expressive model to learn the band correlation, which indicates the importance of each band in forming the hyperspectral data. We parameterize the optimization of this model with a spectral self-expressive module that learns a nonlinear mapping from input hyperspectral frames to band importance. In this way, prior knowledge about the bands is transformed into a learnable network architecture that is computationally efficient and, because no iterative optimization is required, can adapt quickly to changes in target appearance. The band importance is then exploited in two ways. On the one hand, guided by the band importance, each frame of an HSV is divided into several three-channel false-color images, which are used for deep feature extraction and target localization. On the other hand, the importance of each false-color image is computed from the band importance and used to assemble the tracking results of the individual false-color images. In this way, unreliable tracking caused by false-color images of low importance is largely suppressed. Extensive experimental results show that SEE-Net performs favorably against state-of-the-art approaches. The source code will be available at https://github.com/hscv/SEE-Net.
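The band-grouping and importance-weighted ensemble described above can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: the grouping rule (three most-important bands per image), the per-image weight (sum of its bands' importance, normalized), and all function names are assumptions introduced here for clarity.

```python
import numpy as np

def split_into_false_color(frame, band_importance):
    """Group the bands of a hyperspectral frame into three-channel
    false-color images, ordered by learned band importance.

    frame: (H, W, B) hyperspectral cube; band_importance: (B,) weights.
    Hypothetical grouping rule: sort bands by importance, take them
    three at a time (leftover bands, if any, are dropped).
    """
    order = np.argsort(band_importance)[::-1]            # most important first
    groups = [order[i:i + 3] for i in range(0, len(order) - 2, 3)]
    images = [frame[:, :, g] for g in groups]
    # Assumed per-image importance: sum of member-band importance, normalized.
    weights = np.array([band_importance[g].sum() for g in groups])
    weights = weights / weights.sum()
    return images, weights

def ensemble_responses(responses, weights):
    """Fuse per-image tracking response maps by importance weighting,
    so low-importance false-color images contribute less."""
    return sum(w * r for w, r in zip(weights, responses))

# Toy usage: 16 bands -> 5 false-color images (one leftover band dropped).
frame = np.random.rand(64, 64, 16)
importance = np.random.rand(16)
images, weights = split_into_false_color(frame, importance)
responses = [np.random.rand(64, 64) for _ in images]   # stand-in for tracker output
fused = ensemble_responses(responses, weights)
```

In the paper the per-image responses come from a deep tracker run on each false-color image; here they are random placeholders so the fusion step stands alone.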
