Abstract

This paper applies data analysis and action recognition algorithms to the study of judging in professional sports competitions and designs a competition-assisted judging system for use in actual judging. A wearable motion capture system based on inertial sensing units is developed for monitoring kayaking technique, enabling the acquisition, analysis, and quantitative evaluation of kayakers' motion data. To limit gyroscope drift and pose estimation error, a gradient descent method is used to fuse multisensor data and update the athlete's pose, and a quaternion-driven human skeletal vector model is proposed to reconstruct the kayaker's paddling technique. The angular sequences of the left shoulder, right shoulder, left elbow, and right elbow joints of the athlete's upper limbs are calculated and compared with measurements from an optical motion capture system; the results show that the motion capture system developed in this paper is comparable to the optical system in measurement accuracy. Because the resolution of image features ultimately affects the result of pose estimation, a network in which high-resolution and low-resolution branches repeatedly exchange representation information is used so that high-resolution features of the image are maintained throughout. A step matrix model is constructed to encode the multiscale global temporal information of action sequences, and action classification is achieved by computing the response of a test sample's step matrix to the step matrix of each action category. The algorithm achieves classification accuracies of 78.96%, 91.84%, and 91.18% on the Northwestern-UCLA, MSRC-12, and CAD-60 databases, respectively. The designed visual motion tracking system was used to record the motion data of the experimental subjects in a fine motor assessment task and to construct a motion assessment database. The experimental results show that the average error between the predictions of the proposed action assessment method and manual scoring is 1.83, effectively realizing automated assessment of fine movements.
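For concreteness, the sketch below shows one possible form of the gradient-descent orientation update summarized above, in the style of a Madgwick-type filter that fuses gyroscope and accelerometer readings into a quaternion; the function names, the step size `beta`, and the quaternion convention are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def quat_multiply(p, q):
    """Hamilton product of two quaternions given as [w, x, y, z]."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ])

def gradient_descent_update(q, gyro, accel, dt, beta=0.1):
    """One gradient-descent orientation update (Madgwick-style, IMU only).

    q     : current orientation quaternion [w, x, y, z]
    gyro  : angular rate in rad/s, shape (3,)
    accel : accelerometer reading (normalized internally), shape (3,)
    dt    : sampling interval in seconds
    beta  : step size balancing gyro integration against accel correction
    """
    q = np.asarray(q, dtype=float)
    a = np.asarray(accel, dtype=float)
    a = a / np.linalg.norm(a)  # keep only the gravity direction

    w, x, y, z = q
    # Objective: mismatch between predicted and measured gravity direction
    f = np.array([
        2.0 * (x * z - w * y) - a[0],
        2.0 * (w * x + y * z) - a[1],
        2.0 * (0.5 - x * x - y * y) - a[2],
    ])
    # Jacobian of the objective with respect to the quaternion
    J = np.array([
        [-2.0 * y,  2.0 * z, -2.0 * w, 2.0 * x],
        [ 2.0 * x,  2.0 * w,  2.0 * z, 2.0 * y],
        [ 0.0,     -4.0 * x, -4.0 * y, 0.0],
    ])
    grad = J.T @ f
    grad = grad / np.linalg.norm(grad)

    # Rate of change: gyroscope integration corrected along the gradient
    q_dot = 0.5 * quat_multiply(q, np.array([0.0, *gyro])) - beta * grad

    q_new = q + q_dot * dt
    return q_new / np.linalg.norm(q_new)
```

Starting from the identity quaternion and applying this update once per inertial sample yields the orientation stream from which a skeletal vector model can be driven.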
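Likewise, the joint-angle sequences referred to above can be obtained by rotating a nominal bone vector with each segment's orientation quaternion and measuring the angle between adjacent segments; the reference vector and function names in the following sketch are hypothetical and indicate only the general computation, not the paper's exact model.

```python
import numpy as np

def rotate_vector(q, v):
    """Rotate vector v by unit quaternion q = [w, x, y, z]."""
    w, x, y, z = q
    u = np.array([x, y, z])
    v = np.asarray(v, dtype=float)
    # v' = 2(u.v)u + (w^2 - u.u)v + 2w(u x v)
    return 2.0 * np.dot(u, v) * u + (w * w - np.dot(u, u)) * v + 2.0 * w * np.cross(u, v)

def joint_angle(q_proximal, q_distal, ref=np.array([0.0, 0.0, 1.0])):
    """Angle (degrees) between two adjacent body segments.

    Each segment is represented by the reference bone vector `ref`,
    rotated into the global frame by that segment's orientation
    quaternion; the joint angle is the angle between the two results.
    """
    a = rotate_vector(q_proximal, ref)
    b = rotate_vector(q_distal, ref)
    cos_angle = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
```

Evaluating such a function frame by frame for the shoulder and elbow segments produces the angular sequences that are compared against the optical motion capture reference.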
