Abstract
Spatial infrared (IR) moving-target sample data are difficult to collect, which makes large-scale training impractical and limits the development and application of deep learning methods in spatial IR target recognition. This study proposes a transfer-learning method based on a fully convolutional network (FCN) for spatial IR moving-target recognition. First, we built a set of semi-physical scale models of spatial IR moving targets and obtained their IR radiation characteristics as target-domain data using actual shot data and image inversion. Simulation data of the spatial target were obtained using an IR radiation intensity model and used as source-domain data together with the public UCR dataset. We used the dynamic time warping (DTW) distance to measure the similarity between datasets and select appropriate source-domain datasets. An FCN served as the main classification framework and was trained on a large number of source-domain samples. Knowledge transfer and model fine-tuning then enable target-domain learning with only a few samples, improving classification accuracy. The experimental results show that the proposed model achieved an accuracy of 78%, a 20% improvement over direct training on the target domain alone. This study verifies the effectiveness of transfer learning between spatial targets and related domains and provides a reference for the application of deep learning in spatial situational awareness.
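To make the pipeline described above concrete, the sketch below illustrates the two key ingredients in a hedged, simplified form: (1) ranking candidate source-domain datasets by DTW distance to the target-domain series, and (2) a 1-D FCN whose convolutional features, pre-trained on the source domain, are reused and fine-tuned with a new classification head on the small target-domain set. All function names, layer sizes, and hyperparameters are illustrative assumptions (PyTorch, univariate series), not values reported in the paper.

```python
# Minimal sketch of DTW-based source-domain selection plus FCN transfer learning.
# Names, shapes, and hyperparameters are illustrative assumptions, not the paper's settings.

import numpy as np
import torch
import torch.nn as nn


def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic O(len(a)*len(b)) dynamic time warping distance between two 1-D series."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return float(cost[n, m])


def rank_source_datasets(target_series: np.ndarray, candidates: dict) -> list:
    """Rank candidate source datasets by mean DTW distance to a target-domain series."""
    scores = {
        name: float(np.mean([dtw_distance(target_series, s) for s in series_list]))
        for name, series_list in candidates.items()
    }
    return sorted(scores, key=scores.get)  # most similar (smallest distance) first


class FCN1D(nn.Module):
    """Fully convolutional network for univariate time-series classification:
    stacked Conv1d blocks followed by global average pooling and a linear head."""

    def __init__(self, n_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 128, kernel_size=8, padding="same"), nn.BatchNorm1d(128), nn.ReLU(),
            nn.Conv1d(128, 256, kernel_size=5, padding="same"), nn.BatchNorm1d(256), nn.ReLU(),
            nn.Conv1d(256, 128, kernel_size=3, padding="same"), nn.BatchNorm1d(128), nn.ReLU(),
        )
        self.head = nn.Linear(128, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, 1, length)
        z = self.features(x).mean(dim=-1)                # global average pooling over time
        return self.head(z)


def transfer_and_finetune(model: FCN1D, n_target_classes: int, freeze_features: bool = True) -> FCN1D:
    """Reuse source-trained convolutional features; replace the head for the target classes."""
    if freeze_features:
        for p in model.features.parameters():
            p.requires_grad = False
    model.head = nn.Linear(128, n_target_classes)  # retrained on the few target-domain samples
    return model
```

In use, one would first call `rank_source_datasets` to pick the most similar simulated or UCR source dataset, pre-train `FCN1D` on it, and then call `transfer_and_finetune` before continuing training on the small target-domain set; whether to freeze the feature extractor or fine-tune it end to end is a design choice the paper's fine-tuning stage would govern.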