Abstract

There are two important aspects of human action recognition: how to locate the region that best indicates what the subjects in a video are doing, and how to exploit both the appearance and the motion information in the video data. In this paper, we propose a gaze-assisted deep neural network that performs action recognition with the help of human visual attention. We first collect a large amount of human gaze data by recording the eye movements of human subjects as they watch the videos. We then employ a fully convolutional network to learn to predict human gaze. To make efficient use of the gaze data, and inspired by the rank pooling concept, which encodes a video into a single image, we design a novel video representation called dynamic gaze. Dynamic gaze captures both the appearance and the motion information in the video, while the human gaze data better localizes the regions of interest. Building on this representation, we construct a dynamic gaze stream and combine it with the standard two-stream architecture to form our final multi-stream architecture. We have collected over 300k human gaze maps for the J-HMDB dataset, and experiments show that the proposed multi-stream architecture achieves results comparable to the state of the art in action recognition, using both the collected and the predicted human gaze data.
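As a rough illustration of the rank pooling idea behind dynamic gaze, the sketch below uses the approximate rank pooling coefficients of Bilen et al. to collapse a video clip into a single summary image, after weighting each frame by its gaze map. The function names and the gaze-weighting step are our own assumptions for illustration; the paper's exact formulation of dynamic gaze may differ.

```python
import numpy as np

def approximate_rank_pooling(frames):
    """Collapse a clip of shape (T, H, W, C) into one 'dynamic image'
    using approximate rank pooling coefficients (Bilen et al.).
    This is a generic sketch, not the authors' exact method."""
    T = frames.shape[0]
    # Harmonic numbers H_t = sum_{i=1}^t 1/i, with H_0 = 0.
    harm = np.concatenate([[0.0], np.cumsum(1.0 / np.arange(1, T + 1))])
    t = np.arange(1, T + 1)
    # alpha_t = 2(T - t + 1) - (T + 1)(H_T - H_{t-1})
    alpha = 2 * (T - t + 1) - (T + 1) * (harm[T] - harm[t - 1])
    # Weighted sum over the time axis yields a single (H, W, C) image.
    return np.tensordot(alpha, frames, axes=(0, 0))

def dynamic_gaze(frames, gaze_maps):
    """Hypothetical 'dynamic gaze': weight each frame by its gaze
    saliency map before rank pooling, so the pooled image emphasizes
    the regions the human observers attended to."""
    weighted = frames * gaze_maps[..., None]  # broadcast (T,H,W,1) over channels
    return approximate_rank_pooling(weighted)

# Example: a 16-frame 112x112 RGB clip with per-frame gaze maps.
clip = np.random.rand(16, 112, 112, 3).astype(np.float32)
gaze = np.random.rand(16, 112, 112).astype(np.float32)
dg = dynamic_gaze(clip, gaze)  # -> (112, 112, 3) summary image
```

The resulting single image can then be fed to a standard 2D CNN as the input of a dynamic gaze stream, alongside the RGB and optical-flow streams of a two-stream network.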
