Abstract

Early action prediction is an emerging topic in computer vision. To improve prediction accuracy, this work proposes a new end-to-end early action prediction network based on late feature supplement. Unlike existing methods that rely on a model transfer strategy, a feature transfer strategy is defined here: the features of the late clip are treated as labels, and a feature transfer model is trained to map the features of the early clip to those late features. After feature transfer, the generated late features are fused with the early features to form the final video feature, which is then used for action classification. The proposed method is evaluated on action classification from early clips. Experimental results show that it outperforms existing methods at different observation ratios, and an ablation study verifies that the feature transfer strategy significantly improves the accuracy of early action prediction.

Highlights

• A feature transfer strategy is defined to predict late features from early features (see the sketch after this list).
• A new end-to-end early action prediction network based on late feature supplement is built.
• The proposed feature transfer strategy significantly improves the accuracy of early action prediction.
• The proposed method outperforms existing methods.
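To make the described pipeline concrete, below is a minimal PyTorch sketch of the late-feature-supplement idea as summarized in the abstract. All module names, dimensions, the MLP transfer module, the concatenation fusion, and the MSE transfer loss are illustrative assumptions, not the authors' actual architecture.

```python
import torch
import torch.nn as nn

class LateFeatureSupplementNet(nn.Module):
    """Hypothetical sketch: early features -> generated late features -> fusion -> classifier."""

    def __init__(self, feat_dim=512, num_classes=101):
        super().__init__()
        # Placeholder backbone mapping a clip tensor to a feature vector.
        self.backbone = nn.Sequential(nn.Flatten(), nn.LazyLinear(feat_dim), nn.ReLU())
        # Feature transfer model: maps early features to predicted late features.
        self.transfer = nn.Sequential(
            nn.Linear(feat_dim, feat_dim), nn.ReLU(), nn.Linear(feat_dim, feat_dim)
        )
        # Classifier operates on the fused (early + generated late) video feature.
        self.classifier = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, early_clip):
        f_early = self.backbone(early_clip)             # features of the observed early clip
        f_late_hat = self.transfer(f_early)             # generated late features
        f_video = torch.cat([f_early, f_late_hat], 1)   # fuse into the final video feature
        return self.classifier(f_video), f_late_hat

def training_step(model, early_clip, full_late_feat, label, alpha=1.0):
    # full_late_feat plays the role of the label for the transfer model;
    # in practice it would come from the backbone on the late clip, detached.
    logits, f_late_hat = model(early_clip)
    loss_cls = nn.functional.cross_entropy(logits, label)
    loss_transfer = nn.functional.mse_loss(f_late_hat, full_late_feat)
    return loss_cls + alpha * loss_transfer

# Usage with dummy tensors (batch of 4 early clips, 101 classes assumed).
model = LateFeatureSupplementNet()
early = torch.randn(4, 3, 8, 32, 32)
late_feat = torch.randn(4, 512)
labels = torch.randint(0, 101, (4,))
loss = training_step(model, early, late_feat, labels)
```

At inference time only the early clip is needed: the transfer module supplies the late features, so the network remains end-to-end and requires no access to the unobserved part of the video.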
