Abstract

This paper proposes an automatic human motion analysis and action recognition method for sports video sequences. Four significant body points are detected and tracked, using robust camera motion estimation and object localization to compute the human silhouette over time. Statistical analysis of the tracking-point trajectories yields the temporal segmentation of the run-up and jump phases. The method operates on image sequences taken from a moving camera, and the extracted athlete performance characteristics are robust and independent of the discipline, covering pole vault, high jump, triple jump and long jump. Experimental results show that the method handles sequences with complex content and motion. The proposed Extended Convolutional Neural Network (RCNN) architecture contains two main modules. In the first, the four significant body points are measured and tracked using pre-computed human silhouettes, which are obtained with standard algorithms for silhouette detection and moving-object detection in video; the necessary steps are camera motion estimation, change detection based on Bayesian statistics, and label propagation. The long body axis, the run-up and jump phases of the gait cycle, and the temporal segmentation are estimated by statistical analysis of the tracking-point trajectories. In the second module, these features are used for the action recognition task.
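The abstract does not give implementation details, so the following is only a minimal sketch, assuming precomputed binary silhouette masks per frame: it fits the long body axis by PCA of the silhouette pixels, samples four points along that axis, and splits the sequence into run-up and jump phases from the vertical trajectory of the silhouette centroid. All function names, the specific four points, and the phase threshold are assumptions for illustration, not the authors' method.

```python
# Sketch only: axis fitting and run-up/jump segmentation from silhouette masks.
# Assumes silhouettes are already available (the paper obtains them via camera
# motion estimation, Bayesian change detection, and label propagation).
import numpy as np

def body_axis_and_points(mask):
    """Fit the long body axis to a binary silhouette and sample 4 points on it."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    centroid = pts.mean(axis=0)
    # Principal direction of the silhouette pixels = long human axis.
    _, _, vt = np.linalg.svd(pts - centroid, full_matrices=False)
    axis = vt[0]
    # Project pixels onto the axis and take 4 equally spaced points between
    # the extreme projections (roughly head, torso, hip, feet -- an assumption).
    proj = (pts - centroid) @ axis
    ts = np.linspace(proj.min(), proj.max(), 4)
    return centroid, axis, [centroid + t * axis for t in ts]

def segment_phases(centroid_y):
    """Label each frame 'run-up' or 'jump' from the vertical centroid trajectory.

    During the run-up the vertical position oscillates around a stable baseline;
    the jump shows a sustained upward excursion above it (hand-picked threshold).
    """
    y = np.asarray(centroid_y, dtype=float)
    head = y[: max(3, len(y) // 3)]          # assume the clip starts with the run-up
    baseline, spread = np.median(head), np.std(head) + 1e-6
    # Image y grows downward, so "upward" means y decreasing.
    jump = (baseline - y) > 3.0 * spread
    return ["jump" if j else "run-up" for j in jump]

if __name__ == "__main__":
    # Toy demo on synthetic masks: a vertical bar that lifts off in the last frames.
    frames = []
    for t in range(30):
        mask = np.zeros((120, 80), dtype=bool)
        top = 60 - (t - 20) * 10 if t >= 20 else 60
        mask[max(0, top):max(0, top) + 50, 38:42] = True
        frames.append(mask)
    centroids_y = [body_axis_and_points(m)[0][1] for m in frames]
    print(segment_phases(centroids_y))
```

In a real pipeline the per-frame phase labels and tracked points would feed the second module (the RCNN-based action recognition), which is not sketched here.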

