Abstract

This paper proposes a free-viewpoint interface for mobile-robot teleoperation, which provides viewpoints that are freely configurable through the human operator's head pose. The viewpoints are determined by a head tracker equipped on a head-mounted display. A real-time free-viewpoint image generation method based on view-dependent geometry and texture is employed by the interface to synthesize the scene presented to the operator. In addition, a computer graphics model of the robot is superimposed on the free-viewpoint images using an augmented reality technique. We developed a prototype system based on the proposed interface using an omnidirectional camera and depth cameras for experiments. The experiments in both virtual and physical environments demonstrated that the proposed interface can improve the accuracy of robot operation compared with first- and third-person view interfaces, and that the quality of the free-viewpoint images generated by the prototype system was sufficient to realize the expected advantages in operational accuracy.
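To make the interface concrete, the following is a minimal sketch, not taken from the paper, of two pieces the abstract describes: mapping the operator's tracked head pose to a free-viewpoint virtual camera, and computing view-dependent blending weights over the captured cameras for texture synthesis. All function names, the anchor-pose convention, and the cosine-power weighting are illustrative assumptions, not the authors' actual method.

```python
# Hypothetical sketch of head-pose-driven free-viewpoint rendering components.
# Not the paper's implementation; names and conventions are assumptions.
import numpy as np

def virtual_camera_from_head_pose(head_position, head_rotation, anchor_pose):
    """Compose the tracked head pose with an anchor pose fixed relative to the
    robot to obtain the virtual (free-viewpoint) camera pose."""
    R_anchor, t_anchor = anchor_pose
    R_cam = R_anchor @ head_rotation              # 3x3 rotation of the virtual camera
    t_cam = R_anchor @ head_position + t_anchor   # 3-vector position of the virtual camera
    return R_cam, t_cam

def view_dependent_weights(virtual_dir, capture_dirs, sharpness=8.0):
    """Weight each captured camera by how closely its viewing direction matches
    the virtual camera's direction; a larger sharpness lets fewer cameras
    dominate the blended (view-dependent) texture."""
    virtual_dir = virtual_dir / np.linalg.norm(virtual_dir)
    dirs = capture_dirs / np.linalg.norm(capture_dirs, axis=1, keepdims=True)
    cos_sim = np.clip(dirs @ virtual_dir, 0.0, 1.0)
    w = cos_sim ** sharpness
    return w / (w.sum() + 1e-9)

if __name__ == "__main__":
    # Identity head rotation at the origin, anchor one meter in front of the robot.
    R, t = virtual_camera_from_head_pose(
        np.zeros(3), np.eye(3), (np.eye(3), np.array([0.0, 0.0, 1.0])))
    # Blend three capture directions against a forward-looking virtual camera.
    w = view_dependent_weights(
        np.array([0.0, 0.0, 1.0]),
        np.array([[0.0, 0.0, 1.0], [0.7, 0.0, 0.7], [1.0, 0.0, 0.0]]))
    print(R, t, w)
```

In this sketch the weights would be applied per fragment when blending projected camera textures onto the view-dependent geometry; the actual system's weighting and geometry representation may differ.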
