Abstract

Complex action recognition is an important yet challenging problem because of uncontrolled scenes with partial occlusions, viewpoint changes, and dynamically changing backgrounds. Moreover, training a robust and high-performing model requires sufficient labeled data, which is hard to obtain. Since each complex action is composed of a sequence of simple actions, the two can share a common dictionary. To exploit this decomposition, we build a simple-action-guided dictionary learning model (SAG-DLM) for complex action recognition. Specifically, a common dictionary is learned from simple actions to model action-shared features, and this common dictionary is transferred to assist complex action learning. In addition, a difference dictionary specific to complex actions is learned to obtain better sparse representations. Finally, complex actions are reconstructed from the common dictionary and the difference dictionary. We validate the proposed SAG-DLM on two complex action datasets: the Olympic Sports dataset and the UCF50 dataset. Extensive experiments demonstrate the effectiveness of the proposed SAG-DLM and show that the learned common dictionary provides a promising improvement.
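To make the shared-plus-difference dictionary idea concrete, the following is a minimal sketch of one possible reading of the abstract using scikit-learn's generic dictionary learning and sparse coding tools; it is not the paper's actual optimization, and the feature matrices, dictionary sizes, and sparsity levels are hypothetical placeholders.

```python
# Hedged sketch: learn a common dictionary from simple actions, a difference
# dictionary from complex-action residuals, then reconstruct complex actions
# over the concatenated dictionary. All data and hyperparameters are assumed.
import numpy as np
from sklearn.decomposition import DictionaryLearning, SparseCoder

rng = np.random.RandomState(0)

# Hypothetical feature matrices: rows are clip-level descriptors.
X_simple = rng.randn(200, 64)    # simple-action clips (source)
X_complex = rng.randn(100, 64)   # complex-action clips (target)

# 1) Learn a common dictionary D_c from simple actions.
dl_common = DictionaryLearning(n_components=32, alpha=1.0, max_iter=50,
                               random_state=0)
dl_common.fit(X_simple)
D_c = dl_common.components_                      # shape (32, 64)

# 2) Learn a difference dictionary D_d from the residual of complex actions
#    after explaining them with the transferred common dictionary.
coder_c = SparseCoder(dictionary=D_c, transform_algorithm='omp',
                      transform_n_nonzero_coefs=8)
A_c = coder_c.transform(X_complex)               # sparse codes over D_c
residual = X_complex - A_c @ D_c
dl_diff = DictionaryLearning(n_components=16, alpha=1.0, max_iter=50,
                             random_state=0)
dl_diff.fit(residual)
D_d = dl_diff.components_                        # shape (16, 64)

# 3) Represent complex actions over the concatenated dictionary [D_c; D_d]
#    and reconstruct them from the joint sparse codes.
D_joint = np.vstack([D_c, D_d])
coder_joint = SparseCoder(dictionary=D_joint, transform_algorithm='omp',
                          transform_n_nonzero_coefs=12)
A_joint = coder_joint.transform(X_complex)
X_rec = A_joint @ D_joint
print("mean reconstruction error:", np.mean((X_complex - X_rec) ** 2))
```

The joint sparse codes over the concatenated dictionary could then serve as features for a downstream complex-action classifier, which is where the transferred common dictionary would be expected to help when labeled complex-action data are scarce.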
