Abstract

3D LiDAR sensors provide 3D point clouds of the environment and are widely used in autonomous vehicle navigation, whereas 2D LiDAR sensors only provide a point cloud within a 2D sweeping plane and are therefore used only for navigating robots of small height, e.g., floor-mopping robots. In this letter, we propose a simple yet effective deep reinforcement learning (DRL) method with our self-state-attention unit, and present a solution that uses low-cost devices (i.e., a 2D LiDAR sensor and a monocular camera) to navigate a tall mobile robot of one-meter height. The overall pipeline is that we (1) infer dense depth information for RGB images with the aid of the 2D LiDAR sensor data (i.e., point clouds in a plane at a fixed height), (2) further filter the dense depth map into 2D minimal depth data and fuse it with the 2D LiDAR data, and (3) apply a DRL module with our self-state-attention unit to the resulting partially observable sequential decision-making problem, which allows the policy to deal with partially accurate data. We present a novel DRL training scheme for robot navigation, proposing a concise and effective self-state-attention unit and demonstrating that applying this unit can replace multi-stage training while achieving better results and generalization capability. Experiments on both simulated data and a real robot show that our method performs efficient collision avoidance using only a low-cost 2D LiDAR sensor and a monocular camera.
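
The following is a minimal sketch of step (2), assuming the dense depth map is an H x W array aligned with the camera image and the LiDAR scan spans the same horizontal field of view with one range reading per image column. The column-wise reduction and the element-wise minimum fusion rule are illustrative assumptions, not necessarily the authors' exact formulation.

```python
# Sketch of collapsing a dense depth map into 2D minimal depth data and
# fusing it with a 2D LiDAR scan. Shapes and the min-based fusion rule
# are assumptions for illustration.
import numpy as np

def depth_map_to_min_scan(depth: np.ndarray) -> np.ndarray:
    """Collapse a dense H x W depth map into a 1D minimal-depth scan by
    keeping, for each image column, the closest obstacle at any height."""
    return depth.min(axis=0)  # shape: (W,)

def fuse_with_lidar(min_scan: np.ndarray, lidar_scan: np.ndarray) -> np.ndarray:
    """Fuse the camera-derived minimal depth with the 2D LiDAR scan by an
    element-wise minimum, so obstacles seen by either sensor are kept."""
    assert min_scan.shape == lidar_scan.shape
    return np.minimum(min_scan, lidar_scan)

# Usage: a 48 x 64 depth map and a 64-beam LiDAR scan over the same FOV.
depth = np.random.uniform(0.5, 5.0, size=(48, 64))
lidar = np.random.uniform(0.5, 5.0, size=(64,))
fused = fuse_with_lidar(depth_map_to_min_scan(depth), lidar)
print(fused.shape)  # (64,)
```

A min-based reduction is a natural choice here because collision avoidance for a tall robot is governed by the nearest obstacle at any height along each viewing direction.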
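For step (3), the sketch below shows one plausible form of a self-state-attention unit: the robot's own state (e.g., velocity and relative goal) produces per-feature attention weights that gate the encoded fused observation before it reaches the policy. The layer sizes, the gating form, and the state contents are all our assumptions; the paper defines the actual unit.

```python
# Hypothetical self-state-attention unit: observation features are
# re-weighted by a gate computed from the robot's self state. All
# architectural details here are illustrative assumptions.
import torch
import torch.nn as nn

class SelfStateAttention(nn.Module):
    def __init__(self, obs_dim: int, state_dim: int, hidden: int = 64):
        super().__init__()
        self.obs_encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        # Gate conditioned on the robot's self state.
        self.gate = nn.Sequential(
            nn.Linear(state_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.Sigmoid(),  # per-feature attention weights in [0, 1]
        )

    def forward(self, obs: torch.Tensor, self_state: torch.Tensor) -> torch.Tensor:
        feat = self.obs_encoder(obs)     # (B, hidden) encoded fused scan
        weights = self.gate(self_state)  # (B, hidden) attention weights
        return feat * weights            # attended features for the policy

# Usage: 64 fused range readings, 4-dim self state (vx, vy, goal_dx, goal_dy).
unit = SelfStateAttention(obs_dim=64, state_dim=4)
out = unit(torch.randn(8, 64), torch.randn(8, 4))
print(out.shape)  # torch.Size([8, 64])
```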
