Abstract
An algorithm called the adaptive-neural-intention estimator (ANIE) is presented to infer the intent of a human operator's arm movements from observations provided by a 3-D camera sensor (Microsoft Kinect). Intentions are modeled as the goal locations of reaching motions in 3-D space. The human arm's nonlinear motion dynamics are modeled by an unknown nonlinear function in which the intentions appear as parameters. The unknown model is learned using a neural network. Based on the learned model, an approximate expectation-maximization algorithm is developed to infer human intentions. Furthermore, an identifier-based online model learning algorithm is developed to adapt to variations in the arm motion dynamics, the motion trajectory, the goal locations, and the initial conditions across different human subjects. Results of experiments conducted on data obtained from different users performing a variety of reaching motions are presented. The ANIE algorithm is compared with an unsupervised Gaussian mixture model algorithm and a Euclidean distance-based approach using Cornell's CAD-120 data set and data collected in the Robotics and Controls Laboratory at UConn. The ANIE algorithm is also compared with the inverse LQR and ATCRF algorithms on a labeling task carried out on the CAD-120 data set.
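As a rough illustration of the inference step summarized above, the sketch below scores candidate goal locations against a learned arm-dynamics model and normalizes the scores into a posterior over intentions, in the spirit of the expectation step of an EM procedure. This is a minimal, assumption-laden example, not the authors' implementation; all identifiers (ArmDynamicsMLP, infer_intent, candidate_goals) are hypothetical.

```python
# Minimal sketch (NOT the ANIE implementation): intent inference with a
# learned dynamics model. Assumes the arm state is a 3-D position and the
# model predicts velocity given the state and a candidate goal.
import numpy as np


class ArmDynamicsMLP:
    """Tiny MLP approximating x_dot = f(x, g): state + candidate goal -> velocity."""

    def __init__(self, state_dim=3, goal_dim=3, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        in_dim = state_dim + goal_dim
        self.W1 = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.1, (hidden, state_dim))
        self.b2 = np.zeros(state_dim)

    def predict(self, x, g):
        h = np.tanh(np.concatenate([x, g]) @ self.W1 + self.b1)
        return h @ self.W2 + self.b2


def infer_intent(model, trajectory, dt, candidate_goals, sigma=0.05):
    """Score each candidate goal by the Gaussian likelihood of the observed
    velocities under the learned model, then normalize (a soft E-step)."""
    log_lik = np.zeros(len(candidate_goals))
    for k, g in enumerate(candidate_goals):
        for t in range(len(trajectory) - 1):
            v_obs = (trajectory[t + 1] - trajectory[t]) / dt
            v_pred = model.predict(trajectory[t], g)
            log_lik[k] += -0.5 * np.sum((v_obs - v_pred) ** 2) / sigma**2
    w = np.exp(log_lik - log_lik.max())
    return w / w.sum()  # posterior probability of each candidate goal
```

In practice the dynamics model would be trained on recorded reaching trajectories before inference, and the goal with the highest posterior weight would be reported as the estimated intention.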