Abstract

Autonomous radiation source detection has long been studied for radiation emergencies. Compared with conventional data-driven or path-planning methods, deep reinforcement learning shows strong capability for source detection but still lacks the ability to generalize across geometries in unknown environments. In this work, the detection task is decomposed into two subtasks: exploration and localization. A hierarchical control policy (HC) is proposed to perform these subtasks at different stages. The low-level controllers learn to execute the individual subtasks through deep reinforcement learning, while the high-level controller determines which subtask should be executed at the current stage. In experimental tests under different geometrical conditions, HC achieves the best performance among the autonomous decision policies, demonstrating the robustness and generalization ability of the hierarchy.
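
To make the two-level structure concrete, the sketch below shows one possible way such a hierarchy could be wired up. The class names, the `count_rate` observation field, and the threshold-based switching rule are illustrative assumptions, not the paper's implementation; in the paper, the low-level controllers are trained with deep reinforcement learning and the high-level switching is itself a learned decision.

```python
# Minimal sketch of a two-level hierarchical control policy (HC), assuming the
# decomposition described in the abstract: a high-level controller switches
# between an "exploration" and a "localization" sub-policy. All names and the
# switching rule are hypothetical, for illustration only.

import random
from typing import Protocol


class SubPolicy(Protocol):
    """Interface for a low-level controller (e.g., a trained DRL agent)."""
    def act(self, observation: dict) -> int: ...


class RandomSubPolicy:
    """Placeholder sub-policy; a trained DRL policy would replace this."""
    def __init__(self, n_actions: int = 4):
        self.n_actions = n_actions

    def act(self, observation: dict) -> int:
        return random.randrange(self.n_actions)


class HierarchicalController:
    """High-level controller: decides which subtask to run at each step."""

    def __init__(self, explore: SubPolicy, localize: SubPolicy,
                 count_threshold: float = 50.0):
        self.explore = explore
        self.localize = localize
        # Hypothetical switching rule: once the measured count rate exceeds a
        # threshold, assume the source is nearby and switch to localization.
        self.count_threshold = count_threshold

    def act(self, observation: dict) -> int:
        if observation.get("count_rate", 0.0) < self.count_threshold:
            return self.explore.act(observation)   # exploration stage
        return self.localize.act(observation)      # localization stage


if __name__ == "__main__":
    hc = HierarchicalController(RandomSubPolicy(), RandomSubPolicy())
    # Simulated observation with a low count rate -> exploration sub-policy.
    print(hc.act({"count_rate": 12.0}))
```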
