Abstract

With the huge number of online videos uploaded and viewed every day, there is a growing need for action recognition techniques. Applying these techniques to uncontrolled, realistic videos remains a challenging task, given the large variations in camera motion, viewpoint, cluttered background and so on. Moreover, such techniques need to be automated to cope with so many different actions. The goal of this study is to introduce a new technique for mining mid‐level discriminative patches from videos; these patches are the most representative parts for describing an action. To achieve this goal, the authors generalise a technique originally developed for 2D images to generate bounding boxes with high motion and appearance saliency. A clustering‐classification iterative procedure is then applied to the generated boxes, and a discriminative score is calculated for each box. Finally, the top‐ranked boxes are selected to train exemplar‐SVMs on low‐level features extracted from those boxes. The proposed approach has been evaluated on two challenging datasets, YouTube and JHMDB. The experimental results demonstrate that the approach achieves better average recognition accuracy than state‐of‐the‐art techniques.
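
The following is a minimal sketch, not the authors' implementation, of the kind of clustering‐classification loop with discriminative scoring that the abstract describes. It assumes scikit-learn, uses randomly generated stand-in descriptors, and all names and parameters (cluster count, iteration count, scoring rule) are illustrative assumptions only.

```python
# Hedged sketch of iterative discriminative patch mining: cluster candidate
# patch descriptors, train a linear classifier per cluster against patches
# from other classes, score each cluster by how well it separates, and keep
# only the top-ranked clusters for the next round. Purely illustrative.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Stand-in data: low-level descriptors for candidate patches of the target
# action (positives) and patches drawn from other videos (negatives).
pos_patches = rng.normal(loc=1.0, size=(200, 64))
neg_patches = rng.normal(loc=0.0, size=(400, 64))

n_clusters, n_iters, top_k = 10, 3, 5

for it in range(n_iters):
    # 1. Cluster the positive candidates into visually coherent groups.
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=it).fit_predict(pos_patches)

    scores = []
    for c in range(n_clusters):
        members = pos_patches[labels == c]
        if len(members) < 3:                 # skip degenerate clusters
            scores.append(-np.inf)
            continue
        # 2. Train a linear SVM separating this cluster from the negatives.
        X = np.vstack([members, neg_patches])
        y = np.hstack([np.ones(len(members)), np.zeros(len(neg_patches))])
        clf = LinearSVC(C=0.1, max_iter=5000).fit(X, y)
        # 3. Discriminative score: mean SVM margin of the cluster members
        #    (one of many possible scoring choices; assumed here).
        scores.append(clf.decision_function(members).mean())

    # 4. Keep only patches from the top-ranked clusters and iterate, so the
    #    retained patches become progressively more discriminative.
    best = np.argsort(scores)[-top_k:]
    pos_patches = pos_patches[np.isin(labels, best)]

print(f"kept {len(pos_patches)} discriminative patches after {n_iters} rounds")
```

In this sketch the surviving patches would then serve as positives for per-exemplar linear SVMs, mirroring the final exemplar‐SVM training step mentioned in the abstract.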
