Despite the existence of robots that can physically lift heavy loads, robots that can collaborate with people to move heavy objects are not readily available. This article makes progress toward effective human-robot co-manipulation by studying 30 human-human dyads that collaboratively manipulated an object weighing \(27\,\mathrm{kg}\) without being co-located (i.e., participants stood at either end of the extended object). Participants maneuvered the object around different obstacles while exhibiting one of four modi (the manner or objective with which a team moves an object together) at any given time. The primary objective of this work was to use force and motion signals to classify modus, or behavior. Our results showed that two of the originally proposed modi were very similar, such that one could effectively be removed while still spanning the space of common behaviors observed during our co-manipulation tasks. The three modi used in classification were quickly, smoothly, and avoiding obstacles. Using a deep convolutional neural network (CNN), we classified the three modi with up to 89% accuracy on a validation set. The capability to detect or classify modus during co-manipulation has the potential to greatly improve human-robot performance by helping to define appropriate robot behavior or controller parameters depending on the team's current objective or modus.
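The abstract does not specify the network's architecture or inputs in detail. As an illustrative sketch only, a CNN-style classifier mapping a window of multichannel force/motion signals to one of the three modi could be structured as below; the channel count, window length, filter sizes, and weights are all assumptions (random rather than trained), not the authors' actual model.

```python
import numpy as np

# Hypothetical label set taken from the abstract's three modi.
MODI = ["quickly", "smoothly", "avoiding obstacles"]

def conv1d(x, w, b):
    """Valid-mode 1D convolution: x (C_in, T), w (C_out, C_in, K), b (C_out,)."""
    c_out, c_in, k = w.shape
    t_out = x.shape[1] - k + 1
    out = np.zeros((c_out, t_out))
    for t in range(t_out):
        # Correlate each output filter with the current window slice.
        out[:, t] = np.tensordot(w, x[:, t:t + k], axes=([1, 2], [0, 1])) + b
    return out

def classify(window, w, b, w_fc, b_fc):
    """Map one signal window to a modus label: conv -> ReLU -> pool -> linear."""
    h = np.maximum(conv1d(window, w, b), 0.0)  # ReLU feature maps
    pooled = h.mean(axis=1)                    # global average pooling over time
    logits = w_fc @ pooled + b_fc              # linear classifier head (3 classes)
    return MODI[int(np.argmax(logits))]

# Assumed shapes: 8 channels (force + motion), 200-sample window, 16 filters.
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 200))
w = rng.standard_normal((16, 8, 5)) * 0.1
b = np.zeros(16)
w_fc = rng.standard_normal((3, 16)) * 0.1
b_fc = np.zeros(3)
print(classify(x, w, b, w_fc, b_fc))
```

In practice such a model would be trained on labeled dyad trials; this sketch only shows the forward pass from signals to a modus prediction.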