Rethinking Temporal Self-Similarity For Repetitive Action Counting

Abstract

Counting repetitive actions in long untrimmed videos is a challenging task that has many applications such as rehabilitation. State-of-the-art methods predict action counts by first generating a temporal self-similarity matrix (TSM) from the sampled frames and then feeding the matrix to a predictor network. The self-similarity matrix, however, is not an optimal input to a network since it discards too much information from the frame-wise embeddings. We thus rethink how a TSM can be utilized for counting repetitive actions and propose a framework that learns embeddings and predicts action start probabilities at full temporal resolution. The number of repeated actions is then inferred from the action start probabilities. In contrast to current approaches that have the TSM as an intermediate representation, we propose a novel loss based on a generated reference TSM, which enforces that the self-similarity of the learned frame-wise embeddings is consistent with the self-similarity of repeated actions. The proposed framework achieves state-of-the-art results on three datasets, i.e., RepCount, UCFRep, and Countix.
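The temporal self-similarity matrix at the core of this line of work is simply the pairwise similarity between frame-wise embeddings. As a minimal sketch (not the paper's implementation; function and variable names are assumptions), a cosine-similarity TSM can be computed like this, where repeated actions show up as a periodic stripe pattern:

```python
import numpy as np

def temporal_self_similarity(embeddings):
    """Cosine similarity between every pair of frame embeddings.
    embeddings: (T, D) array of frame-wise features -> (T, T) TSM."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    return normed @ normed.T

# Toy example: one action unit of `period` frames repeated 3 times.
T, D, period = 12, 4, 4
rng = np.random.default_rng(0)
unit = rng.normal(size=(period, D))
emb = np.tile(unit, (T // period, 1))  # (12, 4) periodic embeddings
tsm = temporal_self_similarity(emb)

# Frames exactly one period apart are identical, so their cosine
# similarity is 1 -- this is the diagonal-stripe structure a reference
# TSM for repeated actions would encode.
assert np.allclose(tsm[0, period], 1.0)
```

A reference TSM of this ideal periodic form is what the proposed loss compares the learned embeddings' self-similarity against, rather than feeding the TSM itself to the count predictor.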

Similar Papers
  • Research Article
  • Citations: 3
  • DOI: 10.1109/tcsvt.2024.3402728
Joint-Wise Temporal Self-Similarity Periodic Selection Network for Repetitive Fitness Action Counting
  • Oct 1, 2024
  • IEEE Transactions on Circuits and Systems for Video Technology
  • Hu Huang + 3 more

Accurate repetitive action counting has crucial applications in the era of AI-assisted universal fitness. Existing methods are prone to large errors in spatially fine-grained action counting scenarios. In this study, we propose a joint-wise temporal self-similarity periodic selection network (JTSPS-Net) with a human skeleton as its input. Periodic knowledge is embedded in skeleton joint units and selected in a coarse-to-fine manner to focus on the temporal repetition that occurs in the local space. The proposed JTSPS-Net adopts a temporal multiscale fusion strategy to better handle videos with various lengths. To maintain the interpretability of the model, we design an impulse map regression module that uses one random frame per action unit as its labels. Furthermore, to fill the action counting gap in real physical fitness scenarios and to scale up the current repetition count dataset, we construct a high-quality dataset named FitnessRep, which consists of 2,110 fitness videos collected in realistic scenarios. Experiments demonstrate that the proposed JTSPS-Net outperforms the state-of-the-art approach on our dataset and two other public datasets, especially on fine-grained action samples. In addition, it has a good ability to generalize to repetitive actions belonging to unseen categories.
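The impulse-map labels described above can be illustrated with a small sketch (a paraphrase of the abstract's labeling scheme, not the authors' code; the function name and interval format are assumptions): each action unit contributes a single impulse at one randomly chosen frame.

```python
import numpy as np

def impulse_labels(num_frames, action_units, rng=None):
    """Build an impulse regression target: one randomly chosen frame
    per action unit is set to 1, all other frames stay 0.
    action_units: list of (start, end) frame intervals, end exclusive."""
    rng = rng or np.random.default_rng()
    labels = np.zeros(num_frames)
    for start, end in action_units:
        labels[rng.integers(start, end)] = 1.0
    return labels

# Two action units in a 10-frame clip -> exactly two impulses,
# so the label sum directly equals the repetition count.
labels = impulse_labels(10, [(0, 5), (5, 10)], np.random.default_rng(0))
assert labels.sum() == 2.0
```

Summing such a map recovers the action count, which is what keeps the regression target interpretable.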
