Abstract

In the embodied visual navigation task, an agent navigates to a target location based on the visual observations it collects while interacting with the environment. Various approaches have been proposed to learn robust navigation strategies for this task. However, existing approaches assume that the action spaces in the training and testing phases are identical, which is usually not the case in reality, making them difficult to apply directly to practical scenarios. In this paper, we consider the situation where the action spaces in the training and testing phases differ, and propose a novel task of visual navigation subject to embodied mismatch. To solve this task, we establish a two-stage robust adversarial learning framework that learns a robust policy and adapts the learned model to a new action space. In the first stage, an adversarial training mechanism is used to learn a robust feature representation of the state. In the second stage, adaptation training is used to transfer the learned strategy to a new action space with fewer training samples. Experiments on three types of embodied visual navigation tasks in 3D indoor scenes demonstrate the effectiveness of the proposed approach.
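To make the two-stage structure concrete, the sketch below illustrates the general idea in PyTorch. It is not the authors' implementation: the observation dimensions, the FGSM-style gradient adversary, the behaviour-cloning losses, and the frozen-encoder adaptation are all assumptions chosen only to show how a robust encoder trained against an adversary in stage one could be reused with a new policy head for a different action space in stage two.

```
# Minimal two-stage sketch (illustrative only, not the paper's method).
import torch
import torch.nn as nn
import torch.nn.functional as F

OBS_DIM, FEAT_DIM = 128, 64
TRAIN_ACTIONS, TEST_ACTIONS = 6, 4   # hypothetical mismatched action-space sizes

encoder = nn.Sequential(nn.Linear(OBS_DIM, FEAT_DIM), nn.ReLU())
train_head = nn.Linear(FEAT_DIM, TRAIN_ACTIONS)

def adversarial_obs(obs, actions, eps=0.05):
    """Stage-1 adversary: perturb observations to increase the policy loss,
    pushing the encoder toward robust state features (FGSM-style)."""
    obs = obs.clone().requires_grad_(True)
    loss = F.cross_entropy(train_head(encoder(obs)), actions)
    grad, = torch.autograd.grad(loss, obs)
    return (obs + eps * grad.sign()).detach()

# ---- Stage 1: adversarial training of the encoder and original policy head ----
opt1 = torch.optim.Adam(list(encoder.parameters()) + list(train_head.parameters()), lr=1e-3)
for step in range(200):
    obs = torch.randn(32, OBS_DIM)                    # placeholder observations
    actions = torch.randint(0, TRAIN_ACTIONS, (32,))  # placeholder target actions
    obs_adv = adversarial_obs(obs, actions)
    loss = F.cross_entropy(train_head(encoder(obs_adv)), actions)
    opt1.zero_grad(); loss.backward(); opt1.step()

# ---- Stage 2: adapt to the new action space with fewer training samples ----
test_head = nn.Linear(FEAT_DIM, TEST_ACTIONS)             # new head for the new action space
opt2 = torch.optim.Adam(test_head.parameters(), lr=1e-3)  # encoder is kept frozen
for step in range(50):                                    # far fewer adaptation steps
    obs = torch.randn(32, OBS_DIM)
    actions = torch.randint(0, TEST_ACTIONS, (32,))
    with torch.no_grad():
        feats = encoder(obs)                              # reuse the robust features
    loss = F.cross_entropy(test_head(feats), actions)
    opt2.zero_grad(); loss.backward(); opt2.step()
```

In this toy setup, only the lightweight head is updated in stage two, which is one way the transfer could require fewer samples than retraining the whole policy from scratch.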
