Abstract

State estimation and localization for autonomous vehicles are essential for accurate navigation and safe maneuvers. The commonly used method is Kalman filtering, but its performance depends on the noise covariance. An inappropriately set value decreases estimation accuracy and may even cause the filter to diverge. The noise covariance estimation problem has long been considered difficult because there is too much uncertainty about where the noise comes from, which makes it hard to model systematically. In recent years, Deep Reinforcement Learning (DRL) has made remarkable progress and is an excellent choice for tackling problems that conventional techniques cannot solve, such as parameter estimation. By carefully abstracting the problem as a Markov Decision Process (MDP), we can solve it with DRL methods without many prior assumptions. We propose an adaptive covariance tuning method for the Error-State Extended Kalman Filter that takes advantage of DRL, called Reinforcement Learning Aided Covariance Tuning. Preliminary experimental results indicate that our method achieves a 14.73% estimation accuracy improvement on average compared with the vanilla fixed-covariance method and bounds the estimation error within 0.4 m.

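To make the idea concrete, the sketch below shows one possible way to frame covariance tuning as an MDP around a plain 1-D Kalman filter: the agent observes the normalized innovation and picks a multiplicative adjustment to the measurement-noise covariance before each update. This is only an illustrative sketch, not the paper's implementation (which uses an Error-State Extended Kalman Filter and a trained DRL policy); all names such as `scale_actions` and `choose_action` are hypothetical placeholders.

```python
# Conceptual sketch of reinforcement-learning-aided covariance tuning.
# A 1-D constant-position Kalman filter is used for simplicity; a DRL agent
# would replace the heuristic policy below. Not the paper's implementation.
import numpy as np

rng = np.random.default_rng(0)

# Discrete action space: multiplicative adjustments to the noise covariance R.
scale_actions = np.array([0.5, 1.0, 2.0])

def choose_action(innovation_norm: float) -> int:
    """Placeholder policy. A DRL agent would map the MDP state (here, the
    normalized innovation) to an action index; this heuristic only keeps
    the loop runnable."""
    if innovation_norm > 2.0:   # filter looks over-confident -> inflate R
        return 2
    if innovation_norm < 0.5:   # filter looks under-confident -> deflate R
        return 0
    return 1

# Model: x_k = x_{k-1} + w,  z_k = x_k + v
Q, R = 1e-3, 1.0          # process / measurement noise covariances
x_hat, P = 0.0, 1.0       # state estimate and its covariance
true_x = 0.0

for k in range(50):
    true_x += rng.normal(scale=np.sqrt(Q))
    z = true_x + rng.normal(scale=np.sqrt(R))

    # Predict step
    P_pred = P + Q

    # Innovation and its (scalar) covariance
    y = z - x_hat
    S = P_pred + R
    innovation_norm = abs(y) / np.sqrt(S)

    # Agent adjusts R before the update: the "covariance tuning" action
    R *= scale_actions[choose_action(innovation_norm)]

    # Update step with the tuned R
    S = P_pred + R
    K = P_pred / S
    x_hat = x_hat + K * y
    P = (1.0 - K) * P_pred

    # During training, a reward such as -(x_hat - true_x) ** 2 could be used
    # to penalize estimation error at each step.

print(f"final estimate {x_hat:.3f}, ground truth {true_x:.3f}")
```

In this framing the state, action, and reward of the MDP correspond to the filter's innovation statistics, the covariance adjustment, and the (negated) estimation error, respectively; the same pattern extends to the multivariate error-state case by scaling the full covariance matrices instead of a scalar.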