Abstract

Multicamera calibration is a key technique for generating free-view video. By arranging multiple cameras around a scene and applying camera calibration and image processing, a multidimensional viewing experience can be presented to the audience. To address the problem that low-texture scenes, which are common in sports, cannot be robustly self-calibrated when placing artificial markers or calibration towers is impractical, this article proposes a robust multicamera calibration method based on sequence feature matching and fusion. Additionally, to validate the effectiveness of the proposed calibration algorithm, a fast virtual-axis bullet-time synthesis algorithm is proposed for generating free-view video. First, camera self-calibration is performed in low-texture situations by fusing dynamic objects across the time series to enrich the geometric constraints in the scene, without calibration panels or additional artificial markers. Second, a virtual-axis bullet-time video synthesis method based on the calibration result is proposed: in the calibrated multicamera scenario, a bullet-time video is generated quickly by constructing a virtual axis. Qualitative and quantitative comparisons with a state-of-the-art calibration method demonstrate the validity and robustness of the proposed calibration algorithm for free-view video synthesis tasks.
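The abstract only outlines the idea of enriching geometric constraints by fusing dynamic objects over a frame sequence. As a rough illustration (not the paper's actual pipeline), the following Python/OpenCV sketch pools feature matches between two synchronized cameras across many frames, so that moving subjects sweep correspondences over an otherwise low-texture scene, before estimating the relative pose from the accumulated constraints. The frame lists, the intrinsic matrix `K`, and all function names here are assumptions for illustration only.

```python
# Minimal sketch, assuming two synchronized, pre-undistorted camera streams
# and a shared intrinsic matrix K. This is NOT the authors' implementation;
# it only illustrates pooling matches over time to strengthen the epipolar
# constraints used for pairwise self-calibration.
import cv2
import numpy as np

def accumulate_matches(frames_cam1, frames_cam2, max_per_frame=200):
    """Collect matched keypoints over a synchronized frame sequence."""
    orb = cv2.ORB_create(nfeatures=2000)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    pts1, pts2 = [], []
    for img1, img2 in zip(frames_cam1, frames_cam2):
        kp1, des1 = orb.detectAndCompute(img1, None)
        kp2, des2 = orb.detectAndCompute(img2, None)
        if des1 is None or des2 is None:
            continue
        # Keep only the strongest matches from each frame; over the whole
        # sequence the moving objects contribute correspondences spread
        # across the scene.
        matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
        for m in matches[:max_per_frame]:
            pts1.append(kp1[m.queryIdx].pt)
            pts2.append(kp2[m.trainIdx].pt)
    return np.float32(pts1), np.float32(pts2)

def relative_pose(pts1, pts2, K):
    """Estimate relative rotation/translation from the pooled correspondences."""
    E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
    return R, t
```

With such pairwise poses between neighboring cameras, a full multicamera rig could in principle be chained together and refined jointly; the paper's virtual-axis bullet-time synthesis then operates on the calibrated camera poses.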
