Abstract

This work proposes a video understanding technique focused on recognizing the individual actions of people appearing in a video. State-of-the-art methods have shown promising results in video understanding, yet applications such as real-time CCTV surveillance, sports video analysis, and health care demand comprehensive information about human actions. This paper proposes a transfer learning deep neural network model designed to recognize the individual actions performed by multiple people in a video sequence. The model uses a Region-of-Interest (RoI) pooling layer to extract features automatically from each video frame, with MobileNet serving as the backbone for recognizing individual actions in every frame. The accuracy of the model was compared against the CNN models VGG-19, InceptionV3, and MobileNet; the MobileNet backbone is computationally inexpensive and improves the recognition of individual actions performed by multiple people in a frame. The experimental results were evaluated by varying the learning parameters and the optimizer of the deep neural network, and they demonstrate improved performance on the standard benchmark Collective Activity dataset. This research illustrates the progress achievable in action recognition by combining a transfer learning CNN model with an RoI pooling layer.
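To make the described architecture concrete, the sketch below outlines one possible Keras/TensorFlow realization of the idea in the abstract: a frozen MobileNet backbone produces a shared feature map per frame, an RoI pooling step (approximated here with `tf.image.crop_and_resize`) extracts a fixed-size feature per person box, and a small classification head predicts each person's action. All names, shapes, and hyperparameters (`NUM_ACTIONS`, `ROI_SIZE`, the dense head) are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch, assuming a Keras/TensorFlow setup; not the paper's exact model.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import MobileNet

NUM_ACTIONS = 5           # assumed number of individual action classes
FRAME_SHAPE = (224, 224, 3)
ROI_SIZE = (7, 7)         # assumed pooled feature size per person box

# Frozen MobileNet backbone extracts a shared feature map for the whole frame.
backbone = MobileNet(include_top=False, weights="imagenet", input_shape=FRAME_SHAPE)
backbone.trainable = False

frame_in = layers.Input(shape=FRAME_SHAPE, name="frame")
# Person boxes for each frame, normalized to [0, 1] as [y1, x1, y2, x2].
boxes_in = layers.Input(shape=(None, 4), name="person_boxes")

feature_map = backbone(frame_in)  # (batch, 7, 7, 1024) for 224x224 input

def roi_pool(args):
    feats, boxes = args
    n = tf.shape(boxes)[1]
    # crop_and_resize approximates RoI pooling via bilinear crop + resize.
    box_indices = tf.repeat(tf.range(tf.shape(feats)[0]), n)
    flat_boxes = tf.reshape(boxes, (-1, 4))
    return tf.image.crop_and_resize(feats, flat_boxes, box_indices, ROI_SIZE)

# One pooled feature per detected person: (batch * num_people, 7, 7, 1024).
rois = layers.Lambda(roi_pool)([feature_map, boxes_in])
x = layers.GlobalAveragePooling2D()(rois)
x = layers.Dense(256, activation="relu")(x)
actions = layers.Dense(NUM_ACTIONS, activation="softmax", name="individual_action")(x)

model = Model(inputs=[frame_in, boxes_in], outputs=actions)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Under these assumptions, the per-frame cost stays low because the MobileNet feature map is computed once per frame and shared across all person boxes, which mirrors the abstract's claim that the MobileNet backbone keeps multi-person action recognition computationally inexpensive.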
