Abstract

Advances in computing, networking, and multimedia technologies have led to tremendous growth in sports video content and have accelerated the need for its analysis and understanding. Sports video analysis has become an active research area, and a number of potential applications have been identified. In this paper, we summarize our research achievements in semantics extraction and automatic editorial content creation and adaptation for sports video analysis. We first propose a generic multi-layer, multi-modal framework for sports video analysis. We then introduce several mid-level audio/visual features that bridge the semantic gap between low-level features and high-level understanding. We also discuss emerging applications in editorial content creation and content enhancement/adaptation, including event detection, sports MTV generation, automatic broadcast video generation, tactic analysis, player action recognition, virtual content insertion, and mobile sports video adaptation. Finally, we identify future directions in terms of remaining research challenges and expected real-world applications.
