Abstract
We focus on prediction problems in reinforcement learning with linear function approximation. In particular, we study the ℓ1-regularized problem in least-squares temporal difference learning with gradient correction (LS-TDC). Because LS-TDC contains a gradient correction term, its convergence rate is higher than that of the least-squares temporal difference (LS-TD) algorithm. However, like LS-TD, LS-TDC may overfit the data when the number of features exceeds the number of samples, which motivates regularization and feature selection for LS-TDC. It is well known that ℓ1 regularization produces sparse solutions and often serves as an automatic feature selection method in value function approximation. The ℓ1-regularized problem in LS-TDC adds a penalty term to the fixed-point function, but the resulting augmented problem cannot be solved analytically. We therefore build the optimal solution incrementally, using a procedure similar to the Least Angle Regression (LARS) and LARS-TD algorithms. Based on LARS, we propose an ℓ1-regularized version of LS-TDC named LARS-TDC. Experimental results show that LARS-TDC is an effective method for solving the ℓ1-regularized problem.
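To make the construction concrete, the following is a minimal sketch of the ℓ1-penalized fixed-point problem in the style of LARS-TD, which the abstract names as the basis for LARS-TDC; the notation (feature matrices Φ and Φ′ for current and next states, reward vector R, discount factor γ, penalty parameter β) is a standard assumption and is not quoted from the paper itself:

% l1-regularized TD fixed point in the style of LARS-TD:
% theta must be a fixed point of the penalized least-squares map.
% Notation (assumed): \Phi, \Phi' = feature matrices at current/next states,
% R = reward vector, \gamma = discount factor, \beta = l1 penalty level.
\theta \;=\; \operatorname*{arg\,min}_{w}\; \tfrac{1}{2}\,\bigl\lVert R + \gamma\,\Phi'\theta - \Phi w \bigr\rVert_2^2 \;+\; \beta\,\lVert w \rVert_1

Because the penalty sits inside the fixed-point map rather than an ordinary regression objective, this is not a standard lasso problem; a LARS-style homotopy instead traces the solution path as β decreases, adding or removing features when their correlation with the TD residual reaches the current penalty level.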