Abstract

Imagine that hundreds of video streams, taken by mobile phones during a rock concert, are uploaded to a server. One attractive application of such a prominent dataset is to allow a user to create their own video with a deliberately chosen, virtual camera trajectory. In this paper we present algorithms for the main sub-tasks (spatial calibration, image interpolation) related to this problem. Calibration: Spatial calibration of the individual video streams is one of the most basic tasks related to creating such a video. At its core, this requires estimating the pairwise relative geometry of images taken by different cameras. This is also known as the relative pose problem [1], and is fundamental to many computer vision algorithms. In practice, efficiency and robustness are of highest relevance for big data applications such as the ones addressed in the EU-FET_SME project SceneNet. In this paper, we present an improved algorithm that exploits additional data from inertial sensors, such as accelerometers, magnetometers, or gyroscopes, which by now are available in most mobile phones. Experimental results on synthetic and real data demonstrate the accuracy and efficiency of our algorithm. Interpolation: Given the calibrated cameras, we present a second algorithm that generates novel synthetic images along a predefined camera trajectory. Each frame is produced from two “neighboring” video streams that are selected from the database. The interpolation algorithm is then based on the point cloud reconstructed in the spatial calibration phase and iteratively projects triangular patches from the existing images into the new view. We present convincing images synthesized with the proposed algorithm.
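To make the "known rotation" idea concrete, here is a minimal sketch (an illustration, not the paper's actual implementation): once the relative rotation R between two views is known from the inertial sensors, the epipolar constraint x2ᵀ[t]ₓR x1 = 0 becomes linear in the translation t, so its direction can be recovered from as few as two point correspondences via a null-space computation.

```python
import numpy as np

def translation_known_rotation(x1, x2, R):
    """Estimate the translation direction t (up to scale and sign)
    from >= 2 correspondences, given the relative rotation R.

    x1, x2 : (n, 3) arrays of homogeneous image points / bearing
             vectors in the two views (camera 2 satisfies X2 = R X1 + t).
    Each match gives one linear constraint  t . (R x1 x x2) = 0,
    so t spans the null space of the stacked constraint matrix A.
    """
    A = np.array([np.cross(R @ p, q) for p, q in zip(x1, x2)])
    _, _, Vt = np.linalg.svd(A)          # null vector = last right singular vector
    t = Vt[-1]
    return t / np.linalg.norm(t)
```

With noise-free data the recovered direction matches the true translation up to sign; in a robust pipeline this 2-point solver would replace the 5-point minimal solver inside the sampling loop.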

Highlights

  • How to cite this paper: Egozi, A., Eilot, D., Maass, P. and Sagiv, C. (2015) A Robust Estimation Method for Camera Calibration with Known Rotation

  • We propose a geometry-based approach that uses, in addition to the available images, the 3D structure generated in the spatial calibration stage


Introduction

If you visited a rock concert recently, or any other event that attracts crowds, you probably noticed how many people record it with their mobile phone cameras. Combining these video streams potentially allows viewing the scene from arbitrary angles or creating a new video with an artificially designed camera trajectory. This is one of the challenges of SceneNet, which aims to develop software for aggregating such audio-visual recordings of public events in order to create multi-view, high-quality video sequences. These days it is common to use mobile cameras that have built-in sensors such as an accelerometer, compass, and gyroscope. Exploiting their measurements simplifies the robust calibration, since the number of subsets that must be sampled to obtain a valid set grows exponentially with the subset's cardinality. Given the spatially calibrated cameras, the ability to interactively control the viewpoint while watching a video is an exciting application. This poses an additional challenge: an efficient image interpolation algorithm is required in order to obtain free-viewpoint video.
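The sampling cost can be made concrete with the standard RANSAC iteration bound N = log(1 − p) / log(1 − wˢ), a textbook relation rather than a formula from this paper: shrinking the minimal subset size s, e.g. from five correspondences for the classical essential-matrix solver down to two when the rotation is supplied by the sensors, sharply reduces the number of samples needed.

```python
import math

def ransac_iterations(p, w, s):
    """Iterations needed so that, with probability p, at least one
    sampled subset of size s is all-inlier, given inlier ratio w."""
    return math.ceil(math.log(1 - p) / math.log(1 - w ** s))

# Known rotation shrinks the minimal subset, e.g. s = 2 instead of s = 5:
print(ransac_iterations(0.99, 0.5, 5))  # 146 iterations (five-point solver)
print(ransac_iterations(0.99, 0.5, 2))  # 17 iterations (translation only)
```

At a 50% inlier ratio the smaller subset cuts the required samples by roughly an order of magnitude, which is exactly the efficiency argument for using the sensor-provided rotation.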

Complexity Analysis
Spatial Registration
Virtual Camera
Relative Pose Estimation for the Case of Known Rotation Matrix
Mathematical Foundation
Epipolar Geometry
Relative Orientation from Sensor Measurements
Image Interpolation for Virtual Camera
Triangle Warp
Min-Cut Blending
Experimental Results
Synthetic Data
Real Data
Conclusions