Abstract
In this paper we present results from a study on the performance of humans and automatic controllers in a general remote navigation task. The remote navigation task is defined as driving a vehicle with nonholonomic kinematic constraints around obstacles toward a goal. We conducted experiments with humans and automatic controllers in which the number and type of obstacles, as well as the feedback delay, were varied. Humans showed significantly more robust performance than a receding horizon controller. Using the human data, we then train a new human-like receding horizon controller that provides goal convergence when there is no uncertainty. We show that paths produced by the trained human-like controller are similar to human paths and that the trained controller improves robustness compared to the original receding horizon controller.
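The receding horizon controller compared against here repeatedly optimizes a short control sequence over a finite horizon and applies only its first action. As a rough illustration of that idea (not the paper's actual controller), the following is a minimal sampling-based sketch for a unicycle model, a standard example of nonholonomic kinematics; the cost terms, sampling scheme, and parameter values are all illustrative assumptions.

```python
import math
import random

def step(state, u, dt):
    # Unicycle (nonholonomic) kinematics: state = (x, y, theta), u = (v, omega)
    x, y, th = state
    v, w = u
    return (x + v * math.cos(th) * dt,
            y + v * math.sin(th) * dt,
            th + w * dt)

def cost(traj, goal, obstacles, radius):
    # Terminal distance to the goal plus a penalty for entering obstacle discs
    gx, gy = goal
    x, y, _ = traj[-1]
    c = math.hypot(x - gx, y - gy)
    for (ox, oy) in obstacles:
        for (px, py, _) in traj:
            if math.hypot(px - ox, py - oy) < radius:
                c += 100.0  # heavy collision penalty (illustrative value)
    return c

def receding_horizon_control(state, goal, obstacles,
                             horizon=10, samples=200, dt=0.1, radius=0.5):
    # Sample candidate control sequences, roll them out over the horizon,
    # and return the first control of the lowest-cost rollout.
    best_u, best_c = (0.0, 0.0), float("inf")
    for _ in range(samples):
        seq = [(random.uniform(0.0, 1.0), random.uniform(-1.0, 1.0))
               for _ in range(horizon)]
        traj, s = [state], state
        for u in seq:
            s = step(s, u, dt)
            traj.append(s)
        c = cost(traj, goal, obstacles, radius)
        if c < best_c:
            best_c, best_u = c, seq[0]
    return best_u

random.seed(0)
u = receding_horizon_control((0.0, 0.0, 0.0), (5.0, 0.0), [(2.5, 0.1)])
print(u)
```

In closed loop, this function would be called at every time step with the current state, so only the first sampled control of each plan is ever executed; the human-like variant described in the paper modifies the cost or policy using human demonstration data rather than this plain goal-distance objective.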