Abstract
In robot teleoperation, a lack of depth information often results in collisions between the robot and obstacles in its path or surroundings. To address this issue, free viewpoint images can greatly benefit operators in terms of collision avoidance, as they allow the robot's surroundings to be viewed from arbitrary viewpoints, providing better depth perception. In this paper, a novel free viewpoint image generation system is proposed. One approach to generating free viewpoint images is to use multiple cameras and Light Detection and Ranging (LiDAR). Instead of using expensive LiDAR, this study utilizes a cost-effective laser rangefinder (LRF) together with a characteristic of man-made environments. Specifically, we install multiple fisheye cameras and an LRF on a robot, and free viewpoint images are generated under the assumption that walls are perpendicular to the floor. Furthermore, an easy calibration procedure for estimating the poses of the multiple fisheye cameras, the LRF, and the robot model is proposed. Experimental results show that the proposed method can generate free viewpoint images using cameras and an LRF. Finally, the proposed method is primarily implemented in the OpenGL Shading Language to exploit graphics processing unit computation, achieving real-time processing of multiple high-resolution images. Supplementary videos and our source code are available at our project page (https://matsuren.github.io/fvp).
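To illustrate the perpendicular-wall assumption described above, the following is a minimal sketch (not the authors' implementation) of how a 2D LRF scan can be extruded into vertical wall quads for rendering. The function name, parameters, and the fixed `wall_height` are illustrative assumptions, not details from the paper.

```python
import math

def lrf_to_wall_quads(ranges, angle_min, angle_increment, wall_height=2.5):
    """Extrude consecutive 2D LRF hits into vertical wall quads.

    Under the assumption that walls are perpendicular to the floor,
    each pair of neighbouring scan points (x, y, 0) and (x', y', 0)
    defines a quad reaching from the floor up to `wall_height`.
    Returns a list of quads, each a tuple of four (x, y, z) vertices.
    (Names and parameters are hypothetical, for illustration only.)
    """
    # Convert the polar scan to 2D Cartesian points on the floor plane.
    points = []
    for i, r in enumerate(ranges):
        theta = angle_min + i * angle_increment
        points.append((r * math.cos(theta), r * math.sin(theta)))

    # Connect neighbouring points into vertical quads.
    quads = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        quads.append((
            (x0, y0, 0.0), (x1, y1, 0.0),                 # floor edge
            (x1, y1, wall_height), (x0, y0, wall_height), # top edge
        ))
    return quads
```

In a full system, the fisheye camera images would then be projected onto these quads (and onto the floor plane) as textures, which is the kind of per-fragment work that maps naturally onto a GLSL shader.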
Highlights
Visualizing the surrounding environment of a robot is important for efficient robot teleoperation
We masked out the robot-body region in the images before evaluation, as we found that the robot-body region yields worse evaluation metrics, and the quality of the robot model is beyond the scope of this study
The images generated by the previous method [7], images taken from outside the robot, and images generated by the proposed method are presented in the first, second, and last rows of the figure, respectively
Summary
Visualizing the surrounding environment of a robot is important for efficient robot teleoperation. However, camera images alone do not provide much depth information, which sometimes leads to collisions between robots and obstacles. To address the issue of obstacle collision, Keyes et al. investigated the relationship between camera positions and collisions of teleoperated robots [4]. They compared a forward-facing camera, which provides first-person view images, with an overhead camera, which provides third-person view images. From this comparison, they concluded that the third-person view images were more beneficial for obstacle avoidance, as operators could see both the robot body itself and the obstacles.