Abstract

Information technology has changed how people work and live, and it has also catalyzed the evolution of the sports industry. Within the sports industry, digitization is increasingly being applied to basketball. Building on 2D pose estimation, a novel lightweight deep learning architecture (LDLA) is constructed to automate the analysis of basketball game footage. We present a real-time method for estimating the 2D poses of multiple people in a video and examine how 2D pose estimation can be applied to the analysis of basketball shooting videos. First, group and global motion features are extracted to represent semantic events. A full basketball game video is then processed in three stages: clip-based segmentation, classification of semantic events using audio and visual features, and modeling of temporal sequence characteristics with a GRU-CNN. The video is first segmented and coarsely categorized using visual, motion, and audio cues. Domain knowledge is then applied to locate important events in the basketball video. Shot-boundary and frame-threshold detection algorithms use both optical and motion prediction information, followed by scene classification. The locations of probable semantic events, such as "fouling" and "shooting at the basket," are then identified by combining the multidimensional features with supplementary domain knowledge. Experimental results demonstrate that the proposed LDLA method achieves 99.6% accuracy, 93.2% precision, 93.5% recall, and an F1-score of 89.5%.
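
The abstract does not specify the GRU-CNN configuration. The sketch below illustrates, under stated assumptions, how per-frame features from a lightweight CNN could feed a GRU for clip-level semantic event classification; the class name GRUCNNClipClassifier, all layer sizes, the number of event classes, and the input resolution are illustrative assumptions, not the paper's reported implementation.

    # Minimal sketch of a GRU-CNN clip classifier (assumed configuration, not the
    # paper's): a small per-frame CNN produces features that a GRU aggregates over
    # time to predict a semantic event class for the clip.
    import torch
    import torch.nn as nn

    class GRUCNNClipClassifier(nn.Module):
        def __init__(self, num_classes=4, feat_dim=256, hidden_dim=128):
            super().__init__()
            # Lightweight per-frame backbone (placeholder for the LDLA backbone).
            self.cnn = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, feat_dim), nn.ReLU(),
            )
            # GRU aggregates per-frame features along the clip's temporal axis.
            self.gru = nn.GRU(feat_dim, hidden_dim, batch_first=True)
            self.head = nn.Linear(hidden_dim, num_classes)

        def forward(self, clip):
            # clip: (batch, time, channels, height, width)
            b, t, c, h, w = clip.shape
            frame_feats = self.cnn(clip.reshape(b * t, c, h, w)).reshape(b, t, -1)
            _, last_hidden = self.gru(frame_feats)
            return self.head(last_hidden[-1])  # per-clip event logits

    # Example: classify a batch of two 16-frame clips into hypothetical event classes.
    logits = GRUCNNClipClassifier()(torch.randn(2, 16, 3, 112, 112))
    print(logits.shape)  # torch.Size([2, 4])

Taking only the final GRU hidden state keeps the classifier head small, which fits the lightweight design described in the abstract; attention pooling over all time steps would be an alternative design choice.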
