Abstract

Multi-task learning (MTL) is a machine learning approach that shares knowledge across multiple related tasks by learning them jointly. It has been shown to effectively improve generalization over single-task learning (learning just one task at a time). In this paper, we propose a novel MTL architecture that is the first to combine 3D convolutional neural networks (3D CNNs) and long short-term memory (LSTM) networks with an MTL mechanism tailored to sharing information from video inputs. We split each video into several clips and apply the hybrid 3D CNN-LSTM model to extract sequential features from those clips. Our MTL model can therefore share visual knowledge among different action categories more efficiently, based on these clip-level features. We evaluate our method on three popular public action recognition video datasets. The experimental results show that our MTL method efficiently shares detailed video-clip information among multiple action categories and outperforms other multi-task methods.
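The pipeline sketched in the abstract (split a video into clips, encode each clip with a 3D CNN, aggregate clip features with an LSTM, then attach one classification head per task) can be illustrated with a minimal sketch. The code below assumes PyTorch; the layer sizes, the class names `Clip3DCNNLSTM` and `MultiTaskHeads`, and the linear per-task heads are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class Clip3DCNNLSTM(nn.Module):
    """Hybrid 3D-CNN + LSTM feature extractor over video clips.

    A small 3D CNN encodes each clip independently; an LSTM then
    aggregates the per-clip features into one sequential video
    representation shared across tasks. Layer sizes are hypothetical.
    """
    def __init__(self, clip_feat_dim=256, lstm_hidden=128):
        super().__init__()
        # 3D convolutions over (channels, frames, height, width) of one clip
        self.cnn3d = nn.Sequential(
            nn.Conv3d(3, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # global pooling -> (B, 64, 1, 1, 1)
        )
        self.proj = nn.Linear(64, clip_feat_dim)
        self.lstm = nn.LSTM(clip_feat_dim, lstm_hidden, batch_first=True)

    def forward(self, clips):
        # clips: (batch, num_clips, 3, frames, height, width)
        b, n = clips.shape[:2]
        x = clips.flatten(0, 1)          # (b*n, 3, frames, H, W)
        x = self.cnn3d(x).flatten(1)     # (b*n, 64)
        x = self.proj(x).view(b, n, -1)  # (b, n, clip_feat_dim)
        _, (h, _) = self.lstm(x)         # final hidden state of the LSTM
        return h[-1]                     # (b, lstm_hidden) shared video feature

class MultiTaskHeads(nn.Module):
    """One classifier per task on top of the shared video feature."""
    def __init__(self, feat_dim, num_classes_per_task):
        super().__init__()
        self.heads = nn.ModuleList(
            nn.Linear(feat_dim, c) for c in num_classes_per_task
        )

    def forward(self, feats):
        # Returns one logit tensor per task, all computed from the
        # same shared representation.
        return [head(feats) for head in self.heads]

# Example usage: 2 videos, 4 clips each, 8 frames per clip, two tasks.
encoder = Clip3DCNNLSTM()
heads = MultiTaskHeads(feat_dim=128, num_classes_per_task=[10, 5])
clips = torch.randn(2, 4, 3, 8, 32, 32)
logits_per_task = heads(encoder(clips))
```

Because all task heads read from the same clip-sequence feature, gradients from every task flow back through the shared 3D CNN and LSTM, which is the knowledge-sharing mechanism the abstract refers to.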
