Abstract
This paper proposes an intelligent system that can hold an interview, using a NAO robot as an interviewer playing the role of vocational tutor. To this end, twenty behaviors, grouped into five personality profiles, are classified and loaded into NAO. Five basic emotions are considered: anger, boredom, interest, surprise, and joy. The selected behaviors are grouped according to these five emotions. The common behaviors (e.g., movements or body postures) used by the robot during vocational guidance sessions are based on a theory of personality traits called the “Five-Factor Model”. In this context, the robot asks a predefined set of questions, following a theoretical model called the “Orientation Model”, about the person’s vocational preferences. NAO can therefore react as appropriately as possible during the interview, according to the score of the person’s answer to each question and the person’s personality type. Additionally, based on the answers to these questions, a vocational profile is established, and the robot can provide a recommendation about the person’s vocation. The results show how the intelligent selection of behaviors can be successfully achieved through the proposed approach, making Human–Robot Interaction friendlier.
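The selection logic described above can be pictured as a two-step lookup: the answer score is mapped to one of the five emotions, and the (personality profile, emotion) pair indexes into the behavior library. The following Python sketch illustrates this idea only; the score thresholds, profile names, and behavior labels are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch (not the authors' code): selecting a NAO behavior from the
# answer score and the interviewee's personality profile. All names, score
# thresholds, and behavior labels are illustrative assumptions.

# Behaviors grouped by (personality profile, emotion), as the paper describes.
BEHAVIOR_LIBRARY = {
    ("extraversion", "joy"): "Stand/Emotions/Positive/Happy_1",
    ("extraversion", "interest"): "Stand/Gestures/Explain_1",
    ("neuroticism", "boredom"): "Stand/Emotions/Negative/Bored_1",
    # ... the remaining (profile, emotion) pairs would be filled in similarly
}

def score_to_emotion(score):
    """Map an answer score (assumed 0-10 scale) to one of the five emotions."""
    if score >= 8:
        return "joy"
    if score >= 6:
        return "interest"
    if score >= 4:
        return "surprise"
    if score >= 2:
        return "boredom"
    return "anger"

def select_behavior(profile, score):
    """Pick the behavior NAO should run for this profile and answer score."""
    emotion = score_to_emotion(score)
    return BEHAVIOR_LIBRARY.get((profile, emotion), "Stand/Gestures/Hey_1")

print(select_behavior("extraversion", 9))  # -> Stand/Emotions/Positive/Happy_1
```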
Highlights
Robots are fascinating machines that are expected to coexist with humans in the near future
The results show how the intelligent selection of behaviors can be successfully achieved through the proposed approach, making Human–Robot Interaction friendlier
The selection of behaviors executed by NAO is based on a theory of personality traits called the “Five-Factor Model”
Summary
Robots are fascinating machines that are expected to coexist with humans in the near future. This type of technology is becoming increasingly prevalent in places such as shopping malls, train stations, schools, streets, and museums [1,2], and in fields such as personal assistance [3], health [4], and rescue operations [5], among other domains. The robot's behaviors, such as gestural patterns, hand positions, and other actions that require a block of memory for execution, must be defined. This means that each gesture has to be classified in a library or directory, using the Choregraphe application [8], for later use.
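Behaviors catalogued with Choregraphe are installed on the robot and can be launched at runtime through NAOqi's ALBehaviorManager module. The sketch below shows one plausible way to list and run such stored behaviors; the robot's IP address and the behavior name are placeholders, since the actual entries depend on how the library was organized.

```python
# Sketch of launching a behavior stored with Choregraphe via NAOqi's
# ALBehaviorManager (Python SDK). The IP address and behavior name are
# assumed placeholders, not values from the paper.
from naoqi import ALProxy

ROBOT_IP = "192.168.1.10"  # assumed address of the NAO robot
behavior_mgr = ALProxy("ALBehaviorManager", ROBOT_IP, 9559)

# List the behaviors installed on the robot (the classified library).
for name in behavior_mgr.getInstalledBehaviors():
    print(name)

# Run one catalogued gesture, blocking until it finishes.
behavior_name = "vocational_tutor/joy_gesture_1"  # hypothetical library entry
if behavior_mgr.isBehaviorInstalled(behavior_name):
    behavior_mgr.runBehavior(behavior_name)
```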