Abstract

The nexus of robotics, autonomous systems, and artificial intelligence (AI) has the potential to change the nature of human-guided exploration of indoor and outdoor spaces. Such autonomous mobile robots can be incorporated into a variety of applications, ranging from logistics and maintenance to intelligence gathering, surveillance, and reconnaissance (ISR). One such example is a tele-operator using the robot to generate a map of the inside of a building while discovering and tagging objects of interest. During this process, the tele-operator can also assign an area for the robot to navigate autonomously or return to a previously marked area or object of interest. Search-and-rescue and ISR operations could be immensely improved by such capabilities. The goal of this research is to prototype and demonstrate these autonomous capabilities in a mobile ground robot called Explorer51. Objectives include: (i) enabling an operator to drive the robot non-line-of-sight to explore a space by incorporating a first-person-view (FPV) system to stream data from the robot to the base station; (ii) implementing automatic collision avoidance to prevent the operator from running the robot into obstacles; (iii) creating and saving 2D and 3D maps of the space in real time using a 2D laser scanner, a tracking camera, and a depth/RGB camera; (iv) locating and tagging objects of interest as waypoints within the map; and (v) navigating autonomously within the map to reach a chosen waypoint.

To accomplish these goals, we are using the AION Robotics R1 Unmanned Ground Vehicle (UGV) rover as the platform for Explorer51. The rover runs the Robot Operating System (ROS) on an onboard NVIDIA Jetson TX2 board connected to a Pixhawk controller. Sensors include a 2D scanning LiDAR, a depth camera, a tracking camera, and an IMU. Building on existing ROS packages such as Cartographer and the TEB local planner, we plan to implement ROS nodes to accomplish these tasks, and to extend the rover's mapping ability with camera-based visual-inertial odometry (VIO). In addition, we will explore additional features such as autonomous target identification, waypoint marking, collision avoidance, and iterative trajectory optimization. The project will culminate in a series of demonstrations showcasing the robot's autonomous navigation and tele-operation abilities. Success will be evaluated based on ease of use for the tele-operator, collision-avoidance ability, autonomous waypoint-navigation accuracy, and robust map creation at high driving speeds.
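
As one concrete illustration of objective (ii), a minimal safety-gate node can sit between the operator's velocity commands and the drive controller, suppressing forward motion whenever the 2D LiDAR reports an obstacle inside a stop radius. The sketch below uses the standard rospy API; the topic names (teleop/cmd_vel, cmd_vel, scan) and the 0.5 m threshold are illustrative assumptions, not the project's actual configuration.

    #!/usr/bin/env python
    # Hypothetical teleop safety gate: forwards operator velocity commands
    # unless the laser scan shows an obstacle closer than STOP_DISTANCE.
    import rospy
    from sensor_msgs.msg import LaserScan
    from geometry_msgs.msg import Twist

    STOP_DISTANCE = 0.5  # metres; assumed safety threshold

    class SafetyGate(object):
        def __init__(self):
            self.blocked = False
            self.pub = rospy.Publisher('cmd_vel', Twist, queue_size=1)
            rospy.Subscriber('scan', LaserScan, self.on_scan)
            rospy.Subscriber('teleop/cmd_vel', Twist, self.on_cmd)

        def on_scan(self, scan):
            # Drop invalid returns, then check the closest remaining hit.
            valid = [r for r in scan.ranges
                     if scan.range_min < r < scan.range_max]
            self.blocked = bool(valid) and min(valid) < STOP_DISTANCE

        def on_cmd(self, cmd):
            # Zero out forward motion when an obstacle is too close;
            # rotation is still allowed so the operator can turn away.
            if self.blocked and cmd.linear.x > 0.0:
                cmd.linear.x = 0.0
            self.pub.publish(cmd)

    if __name__ == '__main__':
        rospy.init_node('safety_gate')
        SafetyGate()
        rospy.spin()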
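
Similarly, for objective (v), once a waypoint has been tagged in the map, a small action client can hand it to the navigation stack; with TEB configured as the move_base local planner, move_base computes and follows the trajectory. This is a minimal sketch assuming the standard move_base action interface and a SLAM-published map frame; the node name and example coordinates are hypothetical.

    #!/usr/bin/env python
    # Hypothetical waypoint client: sends a tagged map waypoint to move_base.
    import rospy
    import actionlib
    from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

    def go_to(x, y):
        client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
        client.wait_for_server()

        goal = MoveBaseGoal()
        goal.target_pose.header.frame_id = 'map'  # frame from the SLAM stack
        goal.target_pose.header.stamp = rospy.Time.now()
        goal.target_pose.pose.position.x = x
        goal.target_pose.pose.position.y = y
        goal.target_pose.pose.orientation.w = 1.0  # a real node would set a heading

        client.send_goal(goal)
        client.wait_for_result()
        return client.get_state()

    if __name__ == '__main__':
        rospy.init_node('waypoint_client')
        go_to(2.0, 1.5)  # example coordinates of a previously tagged waypoint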
