Abstract

MoCA is a bi-modal dataset in which we collect Motion Capture data and video sequences, acquired from multiple views including an ego-like viewpoint, of upper-body actions in a cooking scenario. It has been collected with the specific purpose of investigating view-invariant action properties in both biological and artificial systems. Beyond this, it represents an ideal test bed for research in a number of fields – including cognitive science and artificial vision – and application domains, such as motor control and robotics. Compared with other available benchmarks, MoCA provides a unique compromise for research communities relying on very different approaches to data gathering: from action recognition in the wild – the standard practice nowadays in Computer Vision and Machine Learning – to motion analysis in highly controlled scenarios, as in motor control for biomedical applications. In this work we introduce the dataset and its peculiarities, and discuss a baseline analysis as well as examples of applications for which the dataset is well suited.

Highlights

  • The Multiview Cooking Actions dataset (MoCA) is a bi-modal dataset acquired to understand motion recognition skills and view-invariance properties of both biological and artificial perceptual systems. Unlike other recently proposed datasets, where actions and activities are observed in highly unconstrained scenarios [1,2], our dataset has been acquired in a set-up designed to achieve a compromise between precision and naturalness of the movement.

  • Our dataset provides a collection of daily-life activities, which can serve both for action recognition from the robot's camera and for generating appropriate robot motions.

  • We identified meaningful cut points of the marker position along the most significant axis with respect to the MoCap reference system, each marking the end of the portion associated with an action instance (a possible implementation is sketched below).
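To illustrate this segmentation heuristic, the following Python sketch finds cut points in a single marker trajectory. It is not the authors' released code: the function name, the choice of the most significant axis as the one with the largest positional variance, the use of local minima as cut points, and the fps and min_gap_s parameters are all assumptions made for the example.

```python
import numpy as np
from scipy.signal import find_peaks

def segment_action_instances(marker_xyz, fps=100.0, min_gap_s=0.5):
    """Split one marker trajectory into per-instance frame ranges.

    marker_xyz : (T, 3) array of marker positions in the MoCap
                 reference system, sampled at `fps` frames/second.
    Returns a list of (start, end) frame-index pairs.
    """
    # "Most significant" axis: here taken to be the axis with the
    # largest positional variance (an assumption for this sketch).
    axis = int(np.argmax(marker_xyz.var(axis=0)))
    trajectory = marker_xyz[:, axis]

    # Cut points: frames where the marker returns to a local minimum
    # along that axis, with a minimum temporal gap between instances.
    min_gap = max(1, int(min_gap_s * fps))
    cut_points, _ = find_peaks(-trajectory, distance=min_gap)

    # Consecutive cut points bound the individual action instances.
    bounds = [0, *cut_points.tolist(), len(trajectory) - 1]
    return list(zip(bounds[:-1], bounds[1:]))
```

For a repetitive gesture such as mixing or grating, the local minima along the dominant axis would correspond to the moments where the hand returns towards its rest position, so each pair of consecutive cut points delimits one repetition.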



Background & Summary

The Multiview Cooking Actions dataset (MoCA) is a bi-modal dataset acquired to understand motion recognition skills and view-invariance properties of both biological and artificial perceptual systems. Unlike other recently proposed datasets, where actions and activities are observed in highly unconstrained scenarios [1,2], our dataset has been acquired in a set-up designed to achieve a compromise between precision and naturalness of the movement. Such properties make our dataset an ideal test bed for a number of fields and related research questions, among which it is worth mentioning collaborative robotics, where a fast comprehension of what the partner is doing and when it is the right moment to act is a fundamental ability. In this respect, our dataset provides a collection of daily-life activities, which can serve both for action recognition from the robot's camera and for generating appropriate robot motions.

