Abstract

Endowing visual agents with predictive capability is a key step toward video intelligence at scale. Early action recognition aims to predict the action label before the complete video has been observed: unlike standard action recognition, the model must forecast the action or its effects from only the initial few frames, so strong reasoning over the temporal dimension is the key to success. To this end, in this paper we propose a novel recurrent network with decomposed space-time attention and a higher-order design to capture the temporal dependencies associated with specific actions. Our method achieves state-of-the-art performance on the Something-Something and EPIC-Kitchens datasets under the early action recognition setting, showing evidence of predictive capability that we attribute to our higher-order recurrent design with space-time attention.
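The abstract does not include the model details, but the core ideas it names can be sketched. The snippet below is an illustrative toy sketch, not the paper's implementation: it assumes per-frame spatial tokens, decomposes attention into a spatial pass (tokens of the current frame attend to each other) and a temporal pass (each token attends over a fixed-length stack of past hidden states), and makes the recurrence higher-order by conditioning on several previous states rather than one. All function names, shapes, and the spatial-then-temporal ordering are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Scaled dot-product attention: (n_q, d), (n_k, d), (n_k, d) -> (n_q, d).
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores, axis=-1) @ v

def space_time_step(frame_tokens, past_states):
    """One recurrent step (hypothetical cell, for illustration only).

    frame_tokens: (n_tokens, d) spatial tokens of the current frame.
    past_states:  list of previous hidden states, each (n_tokens, d),
                  oldest first; len > 1 makes the recurrence higher-order.
    Returns the new hidden state, shape (n_tokens, d).
    """
    # Spatial attention: tokens of the current frame attend to each other.
    spatial = attention(frame_tokens, frame_tokens, frame_tokens)
    # Temporal attention (decomposed from the spatial pass): each token
    # attends over the stack of past states at its own spatial location.
    history = np.stack(past_states)              # (order, n_tokens, d)
    new_state = np.empty_like(spatial)
    for i in range(spatial.shape[0]):
        hist_i = history[:, i, :]                # (order, d)
        new_state[i] = attention(spatial[i:i+1], hist_i, hist_i)[0] + spatial[i]
    return new_state

# Early-recognition setting: run the recurrence over a few initial frames
# only, then a classifier head (omitted) would read out the final state.
rng = np.random.default_rng(0)
d, n_tokens, order = 8, 4, 3
states = [np.zeros((n_tokens, d)) for _ in range(order)]
for t in range(5):                               # observe 5 initial frames
    frame = rng.normal(size=(n_tokens, d))
    states = states[1:] + [space_time_step(frame, states)]
print(states[-1].shape)  # (4, 8)
```

Keeping `order` past states in the recurrence is what "higher-order" refers to here: the update can weight evidence from several earlier time steps instead of a single previous hidden state.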
