Abstract

The human ability to infer others' intent is innate and crucial to development. Machines ought to acquire this ability for seamless interaction with humans. In this article, we propose an agent model for predicting the intent of actors in human–human interactions. This requires simultaneous generation and recognition of an interaction at any time, for which end-to-end models are scarce. The proposed agent actively samples its environment via a sequence of glimpses. At each sampling instant, the model infers the observation class and completes the partially observed body motion. It learns the sequence of body locations to sample by jointly minimizing the classification and generation errors. The model is evaluated on videos of two-skeleton interactions under two settings: (first person) one skeleton is the modeled agent and the other skeleton's joint movements constitute its visual observation, and (third person) an audience is the modeled agent and the two interacting skeletons' joint movements constitute its visual observation. Three methods for implementing the attention mechanism are analyzed using benchmark datasets. One of them, where attention is driven by sensory prediction error, achieves the highest classification accuracy in both settings by sampling fewer than 50% of the skeleton joints, while also being the most efficient in terms of model size. This is the first known attention-based agent to learn end-to-end from two-person interactions for intent prediction, with high accuracy and efficiency.
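
To make the described architecture concrete, the sketch below shows one plausible way such a glimpse-based agent could be wired up in PyTorch: at each step it reads a single attended joint, updates a recurrent state, emits a class prediction (recognition) and a full-pose reconstruction (generation), and picks the next joint to attend using sensory prediction error. This is only a minimal illustration under our own assumptions; the class names, layer sizes, loss weighting, and the simplified prediction-error computation (which compares against the full pose for brevity) are not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GlimpseAgent(nn.Module):
    """Illustrative glimpse agent: per step it observes one skeleton joint,
    classifies the interaction, and reconstructs the full pose.
    All names and dimensions are assumptions, not the authors' implementation."""

    def __init__(self, num_joints=25, joint_dim=3, hidden_dim=128, num_classes=10):
        super().__init__()
        self.num_joints = num_joints
        self.joint_dim = joint_dim
        # Encode the observed joint coordinates together with its index (one-hot).
        self.glimpse_enc = nn.Linear(joint_dim + num_joints, hidden_dim)
        self.rnn = nn.GRUCell(hidden_dim, hidden_dim)
        # Heads: interaction class, full-pose reconstruction, (attention is handled below).
        self.classifier = nn.Linear(hidden_dim, num_classes)
        self.generator = nn.Linear(hidden_dim, num_joints * joint_dim)

    def forward(self, pose, num_glimpses=8):
        # pose: (batch, num_joints, joint_dim) -- joints of the observed skeleton(s).
        batch = pose.size(0)
        h = pose.new_zeros(batch, self.rnn.hidden_size)
        joint_idx = torch.zeros(batch, dtype=torch.long, device=pose.device)
        logits, recon = None, None
        for _ in range(num_glimpses):
            # Glimpse: read only the currently attended joint.
            one_hot = F.one_hot(joint_idx, self.num_joints).float()
            glimpse = pose[torch.arange(batch, device=pose.device), joint_idx]
            h = self.rnn(torch.relu(self.glimpse_enc(torch.cat([glimpse, one_hot], -1))), h)
            logits = self.classifier(h)                                          # recognition
            recon = self.generator(h).view(batch, self.num_joints, self.joint_dim)  # generation
            # Prediction-error-driven attention (one of the three variants discussed):
            # attend next to the joint the generator currently reconstructs worst.
            # Simplification: error is computed against the full pose here.
            pred_err = (recon - pose).pow(2).sum(-1)                             # (batch, num_joints)
            joint_idx = pred_err.argmax(-1)
        return logits, recon


def joint_loss(logits, recon, pose, label):
    # Classification and generation errors are minimized jointly (unit weights assumed).
    return F.cross_entropy(logits, label) + F.mse_loss(recon, pose)
```

A training step would simply call `joint_loss` on the final `logits` and `recon` and backpropagate; how the attention signal itself is trained (and what the agent may observe at inference time) depends on details of the model not specified in the abstract.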
