Abstract

The PbGA-DDPG algorithm, which uses a potential-based, GA-optimized reward shaping function, is a versatile deep reinforcement learning (DRL) agent that can control a vehicle in a complex environment without prior knowledge. However, when compared to an established deterministic controller, it consistently falls short in landing distance accuracy. To address this issue, the HYDESTOC (Hybrid Deterministic-Stochastic) algorithm, a combination of DDPG (deep deterministic policy gradient) and PID (proportional-integral-derivative) control, was introduced to improve terminal distance accuracy while keeping propellant consumption low. Results from extensive cross-validated Monte Carlo simulations show that a miss distance of less than 0.02 meters, a landing speed of less than 0.4 m/s, a settling time of 20 seconds or fewer, and consistently crash-free performance are achievable using this method.

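The abstract describes a hybrid deterministic-stochastic controller that combines a DDPG policy with a PID loop. The sketch below illustrates one plausible way such a blend could be wired up; the class names, state layout, and blending weight are illustrative assumptions and are not taken from the paper.

```python
# Minimal sketch of a hybrid deterministic-stochastic controller in the spirit
# of HYDESTOC: a PID term supplies a deterministic baseline command while a
# learned DDPG policy contributes a correction. The blending weight, names,
# and state layout are assumptions for illustration only.

import numpy as np


class PID:
    """Textbook PID controller for a single scalar error channel."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def hybrid_action(pid, policy, state, target_altitude, blend=0.5):
    """Blend a PID thrust command with a DDPG policy action.

    `policy` is any callable mapping the state vector to an action in [-1, 1];
    blend = 0 gives pure PID, blend = 1 gives pure DDPG. The 50/50 default is
    an assumed value, not the paper's.
    """
    altitude_error = target_altitude - state[0]           # state[0]: altitude
    u_pid = np.clip(pid.step(altitude_error), -1.0, 1.0)  # deterministic part
    u_ddpg = float(policy(state))                         # learned (DDPG) part
    return (1.0 - blend) * u_pid + blend * u_ddpg


if __name__ == "__main__":
    # Dummy policy stands in for a trained DDPG actor network.
    pid = PID(kp=1.2, ki=0.01, kd=0.4, dt=0.05)
    dummy_policy = lambda s: 0.0
    state = np.array([100.0, -5.0])  # [altitude (m), vertical speed (m/s)]
    print(hybrid_action(pid, dummy_policy, state, target_altitude=0.0))
```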