Abstract

Exploring unknown environments is a crucial task in situations where human life is at risk, such as search and rescue operations, abandoned nuclear plants, and covert operations. Autonomous robots could perform this task efficiently. Existing methods rely on uncertainty models for localization and map building to explore unknown areas, which requires substantial onboard computation and time. We propose using Deep Reinforcement Learning (DRL) for the autonomous exploration of unknown environments. In DRL, the agent interacts with the environment and learns from experience (feedback/reward). We propose extrinsic and curiosity-driven reward functions for exploration. The curiosity-driven reward function motivates the agent to explore unseen areas by predicting future states, while the extrinsic reward function encourages the agent to avoid collisions. We train a differential drive robot in one environment and evaluate its performance in another, unknown environment. We observe that the curiosity-driven reward function outperforms the extrinsic reward function by exploring more area of the unknown environment. The test results demonstrate the generalization capability of the proposed methods to explore unknown environments.
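The abstract does not specify the exact form of the curiosity-driven reward. A common formulation, used in intrinsic-curiosity approaches, scores a transition by the prediction error of a learned forward model: the worse the agent predicts the next state, the more novel the transition, and the larger the intrinsic reward. The PyTorch sketch below illustrates that idea under this assumption; `ForwardModel`, `curiosity_reward`, and the state/action dimensions are hypothetical names for illustration, not from the paper.

```python
import torch
import torch.nn as nn

class ForwardModel(nn.Module):
    """Learned dynamics model: predicts the next state from (state, action)."""
    def __init__(self, state_dim, action_dim, hidden_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, state_dim),
        )

    def forward(self, state, action):
        # Concatenate state and action along the feature dimension.
        return self.net(torch.cat([state, action], dim=-1))

def curiosity_reward(model, state, action, next_state):
    """Intrinsic reward as forward-model prediction error.

    A large error means the transition is unfamiliar, so the agent
    is rewarded for visiting it (curiosity-driven exploration).
    """
    with torch.no_grad():
        predicted = model(state, action)
    return 0.5 * (predicted - next_state).pow(2).sum(dim=-1)

# Example usage with hypothetical dimensions (4-D state, 2-D action):
model = ForwardModel(state_dim=4, action_dim=2)
s, a, s_next = torch.randn(1, 4), torch.randn(1, 2), torch.randn(1, 4)
r_intrinsic = curiosity_reward(model, s, a, s_next)
```

In practice the forward model is trained alongside the policy on observed transitions, so familiar regions yield shrinking intrinsic rewards while unexplored regions stay attractive.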
