Abstract

This paper considers the problem of prolonged occlusions on navigation sensors caused by dust, smudges, soil, and similar contaminants. Such uncontrollable occlusions reduce visibility and increase uncertainty, requiring considerably more sophisticated navigation behavior. To secure visibility (i.e., confidence about the world), we propose a confidence-based navigation method that encourages the robot to explore the uncertain region around it by maximizing its local confidence. To effectively extract features from sensor readings with occlusions of varying size, we adopt a point-cloud-based representation network. Our method learns a resilient navigation policy via deep reinforcement learning that autonomously avoids collisions under sensor occlusions while reaching a goal. We evaluate our method in simulated and real-world environments with either static or dynamic obstacles under various sensor-occlusion scenarios. The experimental results show that our method outperforms baseline methods under frequent sensor occlusions, achieving success rates of up to 90% and 80% in the tested static and dynamic environments, respectively.
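The abstract mentions two technical ingredients: a point-cloud representation network that handles a variable number of (non-occluded) sensor points, and a confidence-based objective that rewards reducing local uncertainty. The following is a minimal sketch of both ideas, not the authors' implementation; the full text is unavailable here, so the class names, network sizes, and reward weights are all illustrative assumptions.

```python
# Hypothetical sketch (not the paper's code) of:
# (1) a PointNet-style encoder: a shared per-point MLP plus max pooling, so the
#     output feature size is independent of how many scan points remain after occlusion;
# (2) a shaped reward adding a local-confidence-gain bonus to goal/collision terms.

import torch
import torch.nn as nn


class PointCloudEncoder(nn.Module):
    """Maps a variable-size point cloud (N, in_dim) to a fixed-size feature."""

    def __init__(self, in_dim: int = 2, feat_dim: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (N, in_dim); N varies with how much of the sensor is occluded.
        per_point = self.mlp(points)           # (N, feat_dim)
        return per_point.max(dim=0).values     # (feat_dim,) fixed-size feature


def shaped_reward(goal_progress: float, collided: bool,
                  local_confidence: float, prev_confidence: float,
                  w_conf: float = 0.1) -> float:
    """Hypothetical reward: goal progress, collision penalty, and a bonus for
    increasing confidence (visibility) in the region around the robot."""
    r = goal_progress - (10.0 if collided else 0.0)
    r += w_conf * (local_confidence - prev_confidence)  # confidence-gain bonus
    return r


if __name__ == "__main__":
    enc = PointCloudEncoder()
    scan = torch.randn(137, 2)   # a partially occluded scan with 137 points
    print(enc(scan).shape)       # torch.Size([64]) regardless of point count
    print(shaped_reward(0.05, False, 0.8, 0.7))
```

The max-pooling step is what makes the encoder insensitive to the number of points, which is one standard way to handle occlusions that remove an unpredictable fraction of the scan; whether the paper uses exactly this architecture is not stated in the abstract.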
