Abstract

Autonomous, mobile cyber-physical systems are becoming popular in the transportation and manufacturing sectors, thanks to their ability to provide smart, around-the-clock services to society, production industries, and companies. A smart service is, for example, the autonomous transportation of people in traffic, or of goods in manufacturing plants, without human control.

The use of autonomous, mobile cyber-physical systems has brought both changes and challenges. In particular, dynamic obstacles affect these systems and must be handled to provide robust environments. In addition to interacting with other mobile cyber-physical systems, the systems must also handle interactions with human beings. These interactions can take the form of verbal conversations with the system. However, such communication may run into language barriers when a system, for example, supports only one or two natural languages. These limitations may be neither obvious nor even known to the humans involved. Moreover, a person may be unable to communicate verbally with the systems at all; hand signs are then required to interact with them and to provide a trustworthy and safe environment shared with several cyber-physical systems.

This paper presents Handie, a system that handles hand-sign interaction with autonomous, mobile cyber-physical systems, i.e., mobile robots. The hand signs are simple: "ok", "thumbs-up", and "stop", each of which constitutes a command to the robot. The non-verbal interaction also covers human moods, expressed through facial expressions such as happiness, surprise, and fear. Both hand signs and facial expressions are handled by a deep-learning recognition component running on SoftBank Robotics' NAO robot v6, a humanoid, programmable robot. Test results show that the system can recognize hand signs and facial expressions well enough to be useful, although the interaction is slow and time-consuming. Currently, communication back to the end users is verbal rather than visual. The next steps for Handie will be to expand the set of hand signs and to improve the interaction with the NAO robot.
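To make the command flow concrete, the following minimal Python sketch illustrates how recognized hand signs could be mapped to robot actions with spoken feedback. This is an assumption-laden illustration, not the paper's implementation: the classify_hand_sign stub stands in for the deep-learning recognition component, and the robot address is a placeholder; only the ALProxy, ALTextToSpeech.say, and ALMotion.stopMove calls follow SoftBank's NAOqi Python SDK that ships with the NAO v6.

    # Illustrative sketch, not the paper's published code. The classifier stub,
    # gesture labels, and robot address are assumptions; the proxy calls follow
    # SoftBank's NAOqi Python SDK (Python 2.7) used with the NAO v6.
    from naoqi import ALProxy

    NAO_IP, NAO_PORT = "192.168.1.10", 9559   # placeholder robot address

    def classify_hand_sign(frame):
        # Stand-in for the deep-learning recognition component described above;
        # a real system would run a trained network on the camera frame here.
        return "stop"

    def dispatch(sign, tts, motion):
        # Map a recognized hand sign to a robot action. Feedback is spoken,
        # matching the abstract's note that responses are verbal, not visual.
        if sign == "stop":
            motion.stopMove()             # halt any ongoing locomotion
            tts.say("Stopping.")
        elif sign == "ok":
            tts.say("Okay, continuing.")
        elif sign == "thumbs-up":
            tts.say("Thank you.")

    tts = ALProxy("ALTextToSpeech", NAO_IP, NAO_PORT)
    motion = ALProxy("ALMotion", NAO_IP, NAO_PORT)
    dispatch(classify_hand_sign(None), tts, motion)   # frame capture omitted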
