Assistive robots have great potential to address issues related to an ageing population and the increasing demand for caregiving. Successful deployment of robots working in close proximity to people requires consideration of both safety and human-robot interaction. Dressing is one of the established activities of daily living in which robots could play an assistive role. Using the correct force profile for robot control will be essential in this application of human-robot interaction, requiring careful exploration of factors related to the user's pose and the type of garments involved. In this paper, a Baxter robot was used to dress a jacket onto a mannequin and onto human participants under several combinations of user pose and clothing type (base layers), whilst dynamic data were recorded from the robot, a load cell and an IMU. We also report on the suitability of these sensors for identifying dressing errors, e.g. fabric snagging. Data were analysed by comparing the overlap of confidence intervals to determine sensitivity to dressing conditions. We extend the analysis to classification techniques, such as decision trees and support vector machines, evaluated using k-fold cross-validation. The 6-axis load cell successfully discriminated between clothing types, with predictive model accuracies of 72-97%. Used independently, the IMU and Baxter's sensors were insufficient to discriminate garment types, with the IMU achieving 40-72% accuracy, but when used in combination this pair of sensors achieved an accuracy similar to that of the more expensive load cell (98%). When observing dressing errors (snagging), Baxter's sensors and the IMU data demonstrated poor sensitivity, but applying machine learning methods produced models with high predictive accuracy and low false negative rates (≤5%).
The results show that the load cell could be used independently for this application with good accuracy, but that a combination of the lower-cost sensors could also be used without a significant loss in precision; this will be a key element in the robot control architecture for safe human-robot interaction.
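The classification approach described above can be sketched as follows. This is an illustrative example only, assuming a scikit-learn workflow: the abstract names decision-tree and SVM classifiers evaluated with k-fold cross-validation, but not the implementation, and the synthetic feature matrix here merely stands in for the load-cell/IMU measurements and garment-type labels used in the study.

```python
# Hedged sketch: k-fold cross-validation of decision-tree and SVM
# classifiers, as named in the abstract. Data below are synthetic,
# not the study's sensor recordings.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)

# Hypothetical dataset: 120 dressing trials x 9 summary features
# (e.g. 6-axis load cell plus 3-axis IMU statistics), each trial
# labelled with one of three garment (base-layer) types.
X = rng.normal(size=(120, 9))
y = rng.integers(0, 3, size=120)
X[y == 1] += 1.0   # inject class separation so the labels are learnable
X[y == 2] -= 1.0

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for name, clf in [("decision tree", DecisionTreeClassifier(random_state=0)),
                  ("SVM (RBF kernel)", SVC(kernel="rbf"))]:
    scores = cross_val_score(clf, X, y, cv=cv)
    print(f"{name}: mean accuracy {scores.mean():.2f} "
          f"(+/- {scores.std():.2f}) over {len(scores)} folds")
```

Stratified folds keep the garment-type proportions balanced in each split, which matters when reporting per-class accuracy and false negative rates as the study does.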