Abstract
Target detection has long been of great interest in hyperspectral image analysis. Extracting discriminative features from target samples and their counterpart backgrounds is the key to the problem. Traditional target detection methods rely on a comparatively fixed feature representation for all pixels under observation; for example, the RX detector applies the same distance measure to every pixel. However, the best separation is usually achieved by certain target and background pixels, theoretically the purest target and background pixels, i.e., the endmembers that construct the subspace model. Using the features of these most representative pixels to train a concentrated subspace is therefore expected to enhance the separability between targets and backgrounds. Meanwhile, transferring the discriminative information learned from these few training samples to the much larger set of test pixels, which lie in a different feature space and follow a different data distribution, is a challenge. Here, the idea of transfer learning, originally developed for interactive annotation in video, is employed. Within this transfer learning framework, several issues are addressed, and the proposed method is named unsupervised transfer learning based target detection (UTLD). Firstly, extreme target and background pixels are generated by robust outlier detection, providing the target and background samples that serve as the input to transfer learning. Secondly, background pixels are derived from the root points of a segmentation method, so that the dominant distribution characteristics of the background are preserved after dimensionality reduction. Thirdly, a sparsity constraint is imposed on the transfer learning procedure; with this constraint, a simpler and more concentrated subspace with a clear physical meaning can be constructed. Extensive experiments show that the performance is comparable to state-of-the-art target detection methods.
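For context on the fixed-feature baseline the abstract contrasts against, below is a minimal sketch of the classical global RX detector, which scores every pixel by its Mahalanobis distance to scene-wide background statistics. The array shapes, function name, and synthetic example are illustrative assumptions and are not part of the proposed UTLD method.

```python
import numpy as np

def rx_detector(cube):
    """Global RX anomaly detector (illustrative sketch).

    cube : ndarray of shape (rows, cols, bands)
    Returns a (rows, cols) map of Mahalanobis distances of each pixel
    from the global background mean and covariance.
    """
    rows, cols, bands = cube.shape
    pixels = cube.reshape(-1, bands).astype(np.float64)

    # Background statistics estimated from the whole scene
    mu = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False)
    cov_inv = np.linalg.pinv(cov)  # pseudo-inverse guards against a singular covariance

    # Same distance measure applied to every pixel, as the abstract notes
    centered = pixels - mu
    scores = np.einsum('ij,jk,ik->i', centered, cov_inv, centered)
    return scores.reshape(rows, cols)

if __name__ == "__main__":
    # Toy cube: 50x50 pixels, 20 bands, with one implanted anomalous pixel
    rng = np.random.default_rng(0)
    cube = rng.normal(size=(50, 50, 20))
    cube[25, 25] += 5.0  # synthetic "target" pixel
    detection_map = rx_detector(cube)
    print(detection_map[25, 25] > np.percentile(detection_map, 99))  # expected: True
```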