Abstract

Summarization is important in sports video analysis: it yields a more compact and engaging representation of the content. Automatic summarization of cricket video is particularly challenging because the game has many rules and long match durations. In this research, a hybrid machine learning approach is proposed for cricket video summarization. It analyzes excitement-, object-, and event-based features to detect key events in cricket video. First, the audio is analyzed to extract exciting clips using an adaptive threshold, a speech-to-text framework, and a Stacked Gated Recurrent Neural Network with Attention Module (SGRNN-AM). Then, the scenes of each exciting clip are classified with a new Hybrid Rotation Forest Deep Belief Network (HRF-DBN). Next, character and action features are extracted from the scorecard region of each key frame and from the umpire frames of the exciting clips. Finally, the SGRNN-AM model is used to detect key events, including fours, sixes, and wickets. The accuracy of the proposed SGRNN-AM video summarization model is increased by an attention module applied to the hidden outputs of the Gated Recurrent Unit (GRU) to select the significant features. The performance of the proposed technique was evaluated on various collections of cricket videos. It achieved a precision of $$96.82\%$$ and an accuracy of $$96.32\%$$, which demonstrates its effectiveness.
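The attention mechanism over GRU hidden outputs can be illustrated with a minimal sketch. The abstract does not specify the paper's exact attention formulation, so the dot-product scoring function and the `score_vector` parameter below are assumptions chosen for illustration; a learned implementation would train this vector along with the GRU weights.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_pool(hidden_states, score_vector):
    """Attention pooling over a sequence of GRU hidden states.

    hidden_states: list of T hidden vectors (each a list of floats)
    score_vector:  vector used to score each timestep (assumption: simple
                   dot-product scoring; not the paper's exact formulation)
    Returns the attention-weighted context vector, so timesteps with
    higher scores contribute more to the pooled features.
    """
    scores = [sum(h_i * w_i for h_i, w_i in zip(h, score_vector))
              for h in hidden_states]
    weights = softmax(scores)
    dim = len(hidden_states[0])
    return [sum(w * h[d] for w, h in zip(weights, hidden_states))
            for d in range(dim)]

# Toy example: three 2-d hidden states; the third gets the highest score
# and therefore dominates the pooled context vector.
H = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
context = attention_pool(H, [1.0, 1.0])
```

The pooled context vector would then feed the downstream classifier (here, the event detector for fours, sixes, and wickets) in place of the last hidden state alone.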
