Abstract

Traditional self-similarity matrices (SSMs) perform well in multi-view human action recognition, but their performance degrades when the view change becomes large. To address this problem, a joint dictionary learning (JDL) algorithm based on a joint sparse constraint was proposed to balance the contributions of different views to the sparse features. In JDL, however, the dictionaries and classifiers are trained separately. To improve on JDL, this paper trains the dictionaries and classifiers simultaneously: a task-driven joint dictionary learning (TJDL) model is formulated under the joint sparse constraint. In TJDL, the view-shared dictionary, view-specific dictionaries, linear transformation matrices, action classifiers, view-shared sparse codes, and view-specific sparse codes are learned jointly by a coordinate descent algorithm. Experimental results on three benchmark datasets show that the proposed TJDL algorithm achieves superior performance compared to recent state-of-the-art methods.
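
The abstract does not reproduce the objective function; as a minimal illustrative sketch only (all symbols below are assumptions, not the paper's notation), a task-driven joint dictionary learning objective of this kind, with multi-view data $X_v$, a view-shared dictionary $D$, view-specific dictionaries $D_v$, view-shared and view-specific sparse codes $Z$ and $Z_v$, linear transformation matrices $P_v$, a classifier $W$, and a label matrix $H$, might take the form

\min_{D,\{D_v\},\{P_v\},W,Z,\{Z_v\}} \; \sum_{v=1}^{V} \Big( \|X_v - D Z - D_v Z_v\|_F^2 + \alpha \, \|H - W P_v [Z;\, Z_v]\|_F^2 \Big) + \lambda \, \|[Z;\, Z_1;\, \dots;\, Z_V]\|_{2,1},

where the $\ell_{2,1}$ norm encodes the joint sparse constraint across views, and each block of variables (dictionaries, transformations, classifier, codes) is updated in turn by coordinate descent while the others are held fixed.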
