Abstract

In this paper, a mutual information (MI) and extreme learning machine (ELM) based inverse reinforcement learning (IRL) algorithm, termed MEIRL, is proposed to construct nonlinear reward functions. The basic idea of MEIRL is that, as in GPIRL, the reward function is learned with a Gaussian process and the importance of each feature is obtained through automatic relevance determination (ARD). Mutual information is then employed to evaluate the impact of each feature on the reward function, and on this basis an extreme learning machine is introduced, together with an adaptive model construction procedure, to choose the optimal subset of features; this also enhances the performance of the original GPIRL algorithm. Furthermore, to demonstrate the effectiveness of MEIRL, a highway driving simulation is constructed. The simulation results show that MEIRL is comparable with state-of-the-art IRL algorithms in terms of generalization capability, but more efficient when the number of features is large.
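As a rough illustration of the pipeline described above, the sketch below ranks reward features by their mutual information with estimated rewards and then fits a basic extreme learning machine on the selected subset. This is a minimal sketch under stated assumptions, not the authors' implementation: the function names (`rank_features_by_mi`, `fit_elm`), the use of scikit-learn's `mutual_info_regression` as the MI estimator, and the synthetic stand-in for GPIRL-estimated rewards are all illustrative choices, and the adaptive model construction procedure from the abstract is omitted.

```python
# Illustrative sketch of MI-based feature selection followed by an ELM fit.
# All names and the synthetic data are assumptions, not the paper's code.
import numpy as np
from sklearn.feature_selection import mutual_info_regression


def rank_features_by_mi(X, rewards):
    """Score each state feature by its mutual information with the
    (GPIRL-estimated) rewards; return indices sorted high to low."""
    mi = mutual_info_regression(X, rewards)
    return np.argsort(mi)[::-1], mi


def fit_elm(X, rewards, n_hidden=50, seed=None):
    """Fit a basic extreme learning machine: random, fixed hidden-layer
    weights, output weights solved in closed form by least squares."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)  # random nonlinear hidden layer
    beta, *_ = np.linalg.lstsq(H, rewards, rcond=None)
    return W, b, beta


def predict_elm(X, W, b, beta):
    """Evaluate the fitted ELM reward model on new state features."""
    return np.tanh(X @ W + b) @ beta


# Usage example: keep only the top-k features before fitting the model.
X = np.random.rand(500, 20)                 # state features
rewards = np.sin(X[:, 0]) + 0.5 * X[:, 3]   # stand-in for GPIRL rewards
order, scores = rank_features_by_mi(X, rewards)
top_k = order[:5]
W, b, beta = fit_elm(X[:, top_k], rewards, seed=0)
preds = predict_elm(X[:, top_k], W, b, beta)
```

One reason this style of model suits the setting in the abstract is efficiency: the ELM's output weights are obtained by a single least-squares solve rather than iterative training, so refitting after each change to the feature subset stays cheap even when the candidate feature pool is large.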
