Abstract

Illumination variation (IV) is a challenging issue in hyperspectral video target tracking (HVT). To address this issue, this paper proposes a novel HVT algorithm based on Pixel-wise Spectral Matching Reduction (PSMR) and Deep Spectral Cascading Texture (Deep-SCT) features. PSMR is a novel dimensionality reduction method that approximately segments the target from the background while compressing the hyperspectral image data. The Deep-SCT features combine spectral cascading texture (SCT) features with deep features; the illumination invariance of the local binary pattern operator, together with the deep features, enables the Deep-SCT features to overcome the interference caused by IV. In addition, we propose a feature fusion method, called group-pixel joint convolution, which fuses the SCT features and the deep features. Moreover, the segmentation result produced by the dimensionality reduction process is used to generate a coarse location mask, which both predicts the location of the target and suppresses the texture features of the background. Finally, after localizing the target with the Deep-SCT features, the proposed algorithm uses a step-by-step estimation strategy to adjust the size of the target bounding box. Experiments on benchmark datasets demonstrate the superior performance of the proposed tracker compared with state-of-the-art approaches and show that it is highly robust against IV.
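The abstract does not give the exact formulation of PSMR or of the coarse location mask, so the following is only a minimal illustrative sketch, assuming a spectral-angle-style pixel-wise match against a reference target spectrum that yields both a compressed similarity map and an approximate target/background segmentation. All function names (spectral_match_map, coarse_mask, suppress_background) and the threshold values are hypothetical and are not taken from the authors' implementation.

```python
# Illustrative sketch only; PSMR's actual formulation is not specified here.
import numpy as np


def spectral_match_map(cube: np.ndarray, target_spectrum: np.ndarray) -> np.ndarray:
    """Per-pixel spectral similarity between an HxWxB cube and a B-dim reference.

    Returns an HxW map in [0, 1]; higher values are more target-like.
    """
    h, w, b = cube.shape
    pixels = cube.reshape(-1, b).astype(np.float64)
    ref = target_spectrum.astype(np.float64)
    # Cosine similarity, closely related to the spectral angle mapper.
    num = pixels @ ref
    den = np.linalg.norm(pixels, axis=1) * np.linalg.norm(ref) + 1e-12
    sim = (num / den).reshape(h, w)
    return np.clip(sim, 0.0, 1.0)


def coarse_mask(sim_map: np.ndarray, thresh: float = 0.9) -> np.ndarray:
    """Binary target/background segmentation derived from the similarity map."""
    return (sim_map >= thresh).astype(np.float32)


def suppress_background(features: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Down-weight texture feature maps outside the coarse target region."""
    return features * mask[..., None]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cube = rng.random((64, 64, 16))    # toy hyperspectral frame (H x W x bands)
    ref = cube[32, 32]                 # spectrum sampled at the assumed target pixel
    sim = spectral_match_map(cube, ref)
    mask = coarse_mask(sim, thresh=0.95)
    texture = rng.random((64, 64, 8))  # stand-in for SCT feature maps
    masked = suppress_background(texture, mask)
    print(sim.shape, int(mask.sum()), masked.shape)
```

In this reading, the same pixel-wise matching step serves two roles described in the abstract: it compresses the spectral dimension into a similarity map, and its thresholded output acts as the coarse location mask used to suppress background texture features before localization.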
