The automation of production lines in industrial scenarios requires solving several problems: the flexibility to deploy robotic solutions across different production lines, the usability to allow non-expert users to teach robots new tasks, and the safety to enable operators to physically interact with robots without the need for fences. In this paper, we present a system that integrates three novel technologies to address these problems: an auto-calibrated multi-modal robot skin, a general robot control framework that generates dynamic behaviors by fusing multiple sensor signals, and an intuitive and fast teaching-by-demonstration method based on semantic reasoning. We validate the proposed technologies with a wheeled humanoid robot in an industrial set-up. The benefits of our system are the transferability of learned tasks to different robots, the reusability of the models when new objects are introduced to the production line, the capability to detect and recover from errors, and the reliable detection of collisions and pre-collisions, which yields a fast, reactive robot and improves physical human-robot interaction.