Abstract

To let viewers select only the specific scenes they want to watch in a baseball video and to personalize its highlight sub-video, an Automatic Baseball Video Tagging system is required that automatically divides a baseball video into multiple sub-videos, one per at-bat scene, and appends tag information relevant to each at-bat scene. Toward developing such a system, previous papers proposed several Tagging algorithms using ball-by-ball textual reports and voice recognition, and refined models for baseball games. To improve robustness, this paper proposes a novel Tagging method that utilizes multiple kinds of play-by-play comment patterns for voice recognition, which represent the situation of at-bat scenes, and takes their "Priority" into account. In addition, to search for a voice-recognized play-by-play comment marking the start or end of an at-bat scene, this paper proposes a novel modelling method called "Local Modelling," in addition to the Global Modelling used in previous papers.
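The following is a minimal, hypothetical sketch (not the authors' implementation) of the priority-aware idea described above: voice-recognized play-by-play segments are matched against several comment patterns, and the highest-priority match determines the tag for an at-bat boundary. The pattern strings, priority values, and input format are illustrative assumptions.

```python
import re
from dataclasses import dataclass

@dataclass
class CommentPattern:
    regex: str      # phrase expected in a recognized play-by-play comment (assumed example)
    event: str      # what the comment indicates, e.g. start or end of an at-bat scene
    priority: int   # higher value = treated as a more reliable boundary indicator

# Illustrative patterns only; a real system would use many comment types.
PATTERNS = [
    CommentPattern(r"now batting", "at_bat_start", priority=3),
    CommentPattern(r"steps up to the plate", "at_bat_start", priority=2),
    CommentPattern(r"(strikes out|grounds out|flies out|singles|doubles|homers)",
                   "at_bat_end", priority=3),
    CommentPattern(r"next batter", "at_bat_start", priority=1),
]

def tag_segments(recognized_segments):
    """recognized_segments: list of (timestamp_sec, text) pairs from voice recognition.
    Returns (timestamp, event) tags, keeping only the highest-priority match per segment."""
    tags = []
    for ts, text in recognized_segments:
        matches = [p for p in PATTERNS if re.search(p.regex, text, re.IGNORECASE)]
        if matches:
            best = max(matches, key=lambda p: p.priority)
            tags.append((ts, best.event))
    return tags

if __name__ == "__main__":
    segments = [
        (12.0, "Now batting, number 51"),
        (95.5, "He strikes out swinging"),
    ]
    print(tag_segments(segments))  # [(12.0, 'at_bat_start'), (95.5, 'at_bat_end')]
```

Such tags could then be used to cut the video into per-at-bat sub-videos; how the paper's Local Modelling narrows the search window around each boundary is not detailed in the abstract.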
