Abstract

Convergence problems can occur in some practical situations when using Gaussian Mixture Model (GMM) based robot Learning from Demonstration (LfD). In theory, Expectation Maximization (EM) is a sound technique for estimating GMM parameters, but it can run into difficulties in practice. The contribution of this paper is a more complete analysis of the theoretical problems that arise in a particular experiment. The research question answered in this paper is how a partial solution can be found for such practical problems. Simulation results and results from laboratory experiments verify the theoretical analysis. The two issues covered are the effect of repeated samples on other models and the influence of outliers (abnormal data) on policy/kernel generation in GMM-based LfD. An analysis of the impact of repeated samples on the CHMM, together with experimental results, is also presented.
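
The following is a minimal illustrative sketch, not taken from the paper, of the kind of problem the abstract describes: fitting a GMM with EM (here via scikit-learn's GaussianMixture) on synthetic 2-D "demonstration" data, then refitting after duplicating a few samples and adding outliers. All data, component counts, and parameter values are hypothetical and chosen only to show how repeated samples and abnormal points can shift the estimated components.

```python
# Hypothetical illustration (not the paper's experiment): how repeated samples
# and outliers can distort the GMM components estimated by EM.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic "demonstration" data: two clusters standing in for trajectory segments.
clean = np.vstack([
    rng.normal(loc=[0.0, 0.0], scale=0.3, size=(100, 2)),
    rng.normal(loc=[2.0, 2.0], scale=0.3, size=(100, 2)),
])

# Repeated sampling: the same few points appear many times in the training set.
repeats = np.tile(clean[:5], (40, 1))
# Outliers: a handful of abnormal points far from both clusters.
outliers = rng.normal(loc=[10.0, -10.0], scale=0.5, size=(5, 2))

corrupted = np.vstack([clean, repeats, outliers])

# Fit the same 2-component GMM (EM under the hood) on clean vs. corrupted data.
gmm_clean = GaussianMixture(n_components=2, random_state=0).fit(clean)
gmm_bad = GaussianMixture(n_components=2, random_state=0).fit(corrupted)

print("Means estimated from clean data:\n", gmm_clean.means_)
print("Means estimated from corrupted data:\n", gmm_bad.means_)
```

Comparing the printed means shows the estimated components drifting toward the duplicated points and outliers, which is the practical failure mode the paper analyses for GMM-based LfD.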
