This paper examines two realizations of autonomous landing hazard avoidance for a parafoil: a reinforcement-learning-based approach and a rule-based approach, advocating the former. Comparative advantages and behavioral analogies between the two approaches are also presented. In the data-driven approach, a decision process that observes only a series of nadir-pointing images is designed, without explicit augmentation of vehicle dynamics, so that the observation data remain homogeneous. An agent then learns the hazard-avoidance steering law in an end-to-end fashion. In contrast, the rule-based approach relies on explicit notions of a guidance-control hierarchy, vehicle dynamic states, and metric details of ground obstacles. The soft actor–critic method is applied to learn a policy that maps the down-looking images to parafoil brake commands, whereas the rule-based approach employs a vector field guidance law that treats each hazard as a repulsive source. Empirical equivalences between the designs of the two approaches, as well as their distinctions, are then presented. Numerical experiments across multiple test cases validate the reinforcement learning method and compare the resultant trajectories of the two approaches. Notable behaviors of the learned data-driven policy are highlighted.
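The repulsive-source idea behind the rule-based vector field guidance can be sketched as follows. This is a minimal illustration only: the field shape, gain `k_rep`, and influence radius are assumptions for exposition, not the paper's actual guidance law.

```python
import math

def vector_field_heading(pos, target, hazards, k_rep=1.0, influence=5.0):
    """Desired course angle from an attractive field toward the target
    plus a repulsive field around each hazard (illustrative gains)."""
    # Attractive component: unit vector pointing at the target.
    ax, ay = target[0] - pos[0], target[1] - pos[1]
    d = math.hypot(ax, ay)
    vx, vy = (ax / d, ay / d) if d > 0 else (0.0, 0.0)
    # Repulsive components: each hazard pushes the vehicle away,
    # with strength fading to zero at the influence radius.
    for hx, hy in hazards:
        rx, ry = pos[0] - hx, pos[1] - hy
        r = math.hypot(rx, ry)
        if 0 < r < influence:
            w = k_rep * (1.0 / r - 1.0 / influence) / r
            vx += w * rx
            vy += w * ry
    return math.atan2(vy, vx)  # desired course angle [rad]
```

With no hazards the commanded course points straight at the target; a hazard offset to one side of the direct path bends the commanded course away from it, which a lower-level control layer would then track via differential brake inputs.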