Abstract

Human–robot interaction is a vital part of human–robot collaborative space exploration: it bridges the high-level decision-making and path-planning intelligence of the human with the accurate sensing and modelling ability of the robot. However, most conventional human–robot interaction approaches rely on video streams for the operator to understand the robot's surroundings, which limits situational awareness and leaves the operator stressed and fatigued. This research aims to improve efficiency and promote a more natural level of interaction for human–robot collaboration. We present a human–robot interaction method based on real-time mapping and online virtual reality visualization, which is implemented and verified for rescue robotics. At the robot side, a dense point cloud map is built in real time by tightly coupled LiDAR-IMU fusion; the resulting map is then converted into a three-dimensional normal distributions transform representation. Wireless communication is employed to transmit the three-dimensional normal distributions transform map to the remote control station incrementally. At the remote control station, the received map is rendered in virtual reality using parameterized ellipsoid cells. The operator controls the robot in three modes. In complex areas, the operator can use interactive devices to give low-level motion commands. In less unstructured regions, the operator can instead specify a path or even a target point, which the robot then follows or navigates to autonomously; these two modes rely more on the robot's autonomy. By virtue of the virtual reality visualization, the operator gains a more comprehensive understanding of the space to be explored, so that the high-level decision-making and path-planning intelligence of the human and the accurate sensing and modelling ability of the robot can be well integrated as a whole. Although the method is proposed for rescue robots, it can also be used in other out-of-sight, teleoperation-based human–robot collaboration systems, including but not limited to manufacturing, space, undersea, surgery, agriculture and military operations.
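To make the mapping step concrete, the following is a minimal Python/NumPy sketch of how a dense point cloud can be reduced to three-dimensional normal distributions transform (NDT) cells, and how each cell's Gaussian yields the parameters of a renderable ellipsoid (center, semi-axis lengths, orientation). The function names, voxel size, minimum point count and 2-sigma scale are illustrative assumptions, not the implementation from the paper.

# Sketch: point cloud -> 3D NDT cells -> ellipsoid parameters for rendering.
# Hypothetical helper names; voxel_size, min_points and scale are assumptions.
import numpy as np
from collections import defaultdict

def build_ndt_cells(points, voxel_size=0.5, min_points=5):
    """Group points into voxels and fit a Gaussian (mean, covariance) per voxel."""
    buckets = defaultdict(list)
    for p in points:
        key = tuple(np.floor(p / voxel_size).astype(int))
        buckets[key].append(p)
    ndt = {}
    for key, pts in buckets.items():
        pts = np.asarray(pts)
        if len(pts) < min_points:            # too few samples for a stable covariance
            continue
        mean = pts.mean(axis=0)
        cov = np.cov(pts, rowvar=False) + 1e-6 * np.eye(3)  # regularize
        ndt[key] = (mean, cov)
    return ndt

def ellipsoid_params(mean, cov, scale=2.0):
    """Convert one NDT cell into ellipsoid parameters.

    Returns the center, the semi-axis lengths (scaled standard deviations),
    and a rotation matrix whose columns are the Gaussian's principal axes.
    """
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    semi_axes = scale * np.sqrt(np.maximum(eigvals, 0.0))
    return mean, semi_axes, eigvecs

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cloud = rng.normal(size=(2000, 3)) * [2.0, 0.5, 0.1]  # flat, elongated patch
    for key, (mu, cov) in build_ndt_cells(cloud).items():
        center, axes, rot = ellipsoid_params(mu, cov)
        print(key, center.round(2), axes.round(2))

Summarizing each occupied voxel by a single Gaussian compresses the dense cloud into a compact set of cells, which is what makes incremental transmission to the remote control station and real-time ellipsoid rendering in VR practical.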

Highlights

  • A novel human–robot interaction (HRI) method is proposed for collaborative human–robot space exploration, built upon real-time robotic mapping and online virtual reality (VR) visualization

  • At the remote control station, the received map is rendered in VR using parameterized ellipsoid cells


Introduction

As a kind of service robot, rescue robots are deployed to search for and save victims in various disasters, for example, mining accidents, terrorist attacks, earthquakes, city conflagrations and explosions, reducing the risks to human rescuers. Rescue robots usually face unstructured, complex environments, which require good locomotion abilities. Once deployed in the field, rescue robots act as an extension of human beings for both perception and operation. They are usually equipped with multiple sensors and effectors, which enable them to explore the space, for example, to map the surroundings, search for victims and determine their life status and location. We present a human–robot interaction (HRI) method based on real-time mapping and online virtual reality (VR) visualization for collaborative human–robot space exploration. Our main contribution is an HRI method based on real-time robotic mapping and online VR visualization, which is implemented and evaluated on our rescue robot. The seventh section concludes this article and discusses future work.

