Abstract

This paper proposes a method for synthesizing virtual views from two real views captured from different perspectives. The method is also readily applicable to video synthesis: it can quickly generate new virtual views and a smooth view-transition video without camera calibration or a depth map. First, we extract corresponding feature points from the real views using the SIFT algorithm. Second, we build a virtual multi-camera model. We then calculate the coordinates of the feature points in each virtual perspective and project the real views onto that perspective. Finally, the virtual views are synthesized. The method applies to most real scenes, such as indoor and street scenes.
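The abstract does not give the exact formula for computing feature-point coordinates in a virtual perspective. As a rough sketch of that step, the snippet below assumes the simplest possible model: a virtual camera parameterized by t in [0, 1] along the baseline, with each matched point's virtual-view coordinates taken as the linear interpolation of its coordinates in the two real views. The function name `interpolate_feature_points` is hypothetical and not from the paper.

```python
import numpy as np

def interpolate_feature_points(pts_left, pts_right, t):
    """Estimate feature coordinates in a virtual view at parameter
    t in [0, 1] (t=0 -> left real view, t=1 -> right real view)
    by linearly interpolating matched point coordinates.

    pts_left, pts_right: (N, 2) arrays of matched feature points,
    assumed to correspond row by row (e.g. from SIFT matching).
    This is an illustrative simplification, not the paper's model.
    """
    pts_left = np.asarray(pts_left, dtype=float)
    pts_right = np.asarray(pts_right, dtype=float)
    return (1.0 - t) * pts_left + t * pts_right

# Two matched points; virtual view midway between the cameras.
left = [[100.0, 200.0], [300.0, 400.0]]
right = [[120.0, 198.0], [310.0, 402.0]]
mid = interpolate_feature_points(left, right, 0.5)
# mid -> [[110., 199.], [305., 401.]]
```

In practice the interpolated point sets would then drive the warp that projects each real view onto the virtual perspective before blending.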
