Abstract

Robots that can assist in the activities of daily living, such as dressing, may support older adults, addressing the needs of an aging population in the face of a growing shortage of care professionals. Using depth cameras during robot-assisted dressing can lead to occlusions and loss of user tracking, which may result in unsafe trajectory planning or prevent the planning task from proceeding altogether. For the dressing task of putting on a jacket, which is addressed in this letter, tracking of the arm is lost when the user's hand enters the jacket, which may lead to unsafe situations for the user and a poor interaction experience. Using occlusion-free motion tracking data gathered from a human–human interaction study on an assisted dressing task, recurrent neural network models were built to predict the elbow position of a single arm from other features of the user's pose. The best features for predicting the elbow position were explored using regression trees, which indicated the hips and shoulder as possible predictors. Engineered features were also created from observations of real dressing scenarios, and their effectiveness was explored. A comparison between position-based and orientation-based datasets was also included in this study. A 12-fold cross-validation was performed for each feature set and repeated 20 times to improve statistical power. Using position-based data, the elbow position could be predicted with a 4.1 cm error, but adding engineered features reduced the error to 2.4 cm. Adding orientation information to the data did not improve the accuracy, and aggregating univariate response models failed to yield significant improvements. The model was evaluated on Kinect data for a robot dressing task and, although not without issues, demonstrated potential for this application. Although this has been demonstrated for jacket dressing, the technique could be applied to a number of other situations involving occluded tracking.
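
The feature-selection step described above can be illustrated with a short, hedged sketch: a Scikit-learn regression tree ranks candidate pose features by importance for predicting the left-elbow position. The joint names, array shapes, and synthetic data below are illustrative assumptions, not the study's actual motion-tracking dataset.

    # Minimal sketch: ranking candidate pose features for predicting the
    # left elbow with a regression tree. Joint names and data are assumed
    # stand-ins, not the authors' motion-tracking dataset.
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(0)

    # One row per motion-capture frame; columns are x/y/z coordinates of
    # candidate joints (hips, shoulders, hand, head).
    feature_names = [f"{joint}_{axis}"
                     for joint in ("hip_l", "hip_r", "shoulder_l",
                                   "shoulder_r", "hand_l", "head")
                     for axis in "xyz"]
    X = rng.normal(size=(5000, len(feature_names)))
    y = rng.normal(size=(5000, 3))  # left-elbow x/y/z target

    tree = DecisionTreeRegressor(max_depth=6, random_state=0).fit(X, y)

    # High impurity-based importance marks a candidate predictor; the
    # letter reports the hips and shoulder ranking highly on real data.
    ranked = sorted(zip(feature_names, tree.feature_importances_),
                    key=lambda pair: -pair[1])
    for name, importance in ranked[:5]:
        print(f"{name}: {importance:.3f}")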

Highlights

  • Robots that are capable of assisting humans in activities of daily living (ADL) may become a valuable resource in an aging population

  • The main contributions of this letter are the identification of features that can be used to predict the user’s elbow position, a neural network topology that can be used for prediction, and a method for how this might be achieved in a real dressing scenario with a Kinect camera

  • Using a Python environment in Jupyter Notebooks, Scikit-learn [24] was used alongside Keras [25], with TensorFlow [26] as the backend, to train a Long Short-Term Memory (LSTM) network for predicting the left elbow position given the feature sets explored above (a minimal sketch follows this list)
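
A minimal sketch of such a network, assuming an illustrative topology: the window length, feature count, and layer sizes below are assumptions, not the letter's exact configuration.

    # Hedged sketch of an LSTM for left-elbow prediction, built with Keras
    # on a TensorFlow backend. The topology shown is an assumption for
    # illustration; the letter's exact network may differ.
    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers

    TIMESTEPS, N_FEATURES = 10, 18  # assumed pose-feature window per sample

    model = keras.Sequential([
        layers.Input(shape=(TIMESTEPS, N_FEATURES)),
        layers.LSTM(64),   # recurrent layer over the pose window
        layers.Dense(3),   # left-elbow x/y/z output
    ])
    model.compile(optimizer="adam", loss="mse")

    # Synthetic stand-in data; the study trains on occlusion-free
    # motion-tracking features from the human-human dressing dataset.
    X = np.random.rand(1000, TIMESTEPS, N_FEATURES).astype("float32")
    y = np.random.rand(1000, 3).astype("float32")
    model.fit(X, y, epochs=5, batch_size=32, verbose=0)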


Summary

INTRODUCTION

Robots that are capable of assisting humans in activities of daily living (ADL) may become a valuable resource in an aging population. In this letter we explore the problem of arm tracking when a robot equipped with a depth camera assists a person with jacket dressing, where the garment occludes the arm, and how tracking can be restored using predictive neural-network models. Self-occlusion may occur during robot-assisted dressing if the user turns away from the camera, blocking its line of sight to the tracked limb. Robot-occlusion typically occurs when the robot intersects the depth camera's line of sight to the user; it is particularly relevant when the camera is mounted on the robot and the robot's arms may move into view, and it is another important area for dressing, but it is not explored in this work. The main contributions of this letter are the identification of features that can be used to predict the user’s elbow position, a neural network topology that can be used for prediction, and a method for how this might be achieved in a real dressing scenario with a Kinect camera.
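
One way the recovery step might be wired into a live Kinect pipeline is sketched below; the frame dictionary, tracking-state flag, and window buffering are hypothetical glue code for illustration, not the authors' implementation.

    # Hedged sketch: substitute the model's prediction when the Kinect
    # loses the elbow (e.g. when the hand enters the jacket sleeve).
    from collections import deque
    import numpy as np

    WINDOW = 10
    history = deque(maxlen=WINDOW)  # rolling window of recent pose features

    def elbow_estimate(model, frame):
        """Return a tracked or predicted left-elbow position for one frame.

        `frame` is a hypothetical dict holding the still-visible pose
        features, a Kinect tracking flag, and the raw elbow coordinates
        when the elbow is tracked.
        """
        history.append(frame["features"])
        if frame["elbow_tracked"]:          # Kinect still sees the elbow
            return frame["elbow_xyz"]
        if len(history) == WINDOW:          # enough context to predict
            window = np.asarray(history)[np.newaxis]  # (1, WINDOW, n_feat)
            return model.predict(window, verbose=0)[0]
        return None                         # signal the planner to pause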

PROBLEM AND HYPOTHESIS
The Elbow Marker
RELATED WORK
METHOD
Feature Selection - Multivariate Response
Features for Univariate Response - Single Axis Position
Engineered Features
Feature Sets
PREDICTIVE MODELLING
Cross-Validation
Findings
CONCLUSION