Abstract
The introduction of assistive robots has created numerous ways to restore a vital degree of independence to elderly and disabled people in their Activities of Daily Living (ADL). The most important capability of an assistive robot is to understand the user's intentions with a minimum number of interactions. In this study, we propose a novel method to recognize the implicit intention of a human user from verbal communication, behavior recognition, and motion recognition, using a combination of machine learning, computer vision, and voice recognition technologies. After recognizing the user's implicit intention, the system can identify the objects in the domestic environment that will help the user and point them out so that the intention can be fulfilled. This study is expected to simplify human-robot interaction (HRI), thereby enhancing the adoption of assistive technologies and improving users' independence in ADL. These findings should help guide future designs for implicit intention recognition and activity recognition toward accurate intention inference algorithms and intuitive HRI.
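To make the proposed pipeline concrete, the following is a minimal illustrative sketch (not the authors' implementation) of how three modality cues, a transcribed utterance, a recognized behavior label, and a recognized motion label, might be fused by simple voting into an implicit-intention estimate and mapped to candidate household objects. All labels, keywords, and the object map are hypothetical placeholders assumed for illustration.

```python
# Hypothetical late-fusion sketch: each modality casts votes for an intention,
# and the winning intention is mapped to candidate domestic objects.
from collections import Counter
from typing import Dict, List

# Hypothetical mapping from inferred intentions to relevant domestic objects.
INTENTION_OBJECTS: Dict[str, List[str]] = {
    "drink": ["cup", "water bottle"],
    "read": ["book", "reading glasses"],
    "rest": ["pillow", "blanket"],
}

# Hypothetical keyword cues per modality (speech, behavior, motion).
SPEECH_CUES = {"thirsty": "drink", "water": "drink", "book": "read", "tired": "rest"}
BEHAVIOR_CUES = {"looking_at_shelf": "read", "reaching_toward_table": "drink", "yawning": "rest"}
MOTION_CUES = {"walking_to_kitchen": "drink", "sitting_down": "rest"}


def infer_intention(utterance: str, behavior: str, motion: str) -> str:
    """Fuse per-modality intention votes and return the most supported intention."""
    votes: Counter = Counter()
    for word in utterance.lower().split():
        if word in SPEECH_CUES:
            votes[SPEECH_CUES[word]] += 1
    if behavior in BEHAVIOR_CUES:
        votes[BEHAVIOR_CUES[behavior]] += 1
    if motion in MOTION_CUES:
        votes[MOTION_CUES[motion]] += 1
    return votes.most_common(1)[0][0] if votes else "unknown"


if __name__ == "__main__":
    intention = infer_intention("I feel a bit thirsty",
                                "reaching_toward_table",
                                "walking_to_kitchen")
    print("Inferred intention:", intention)
    print("Candidate objects:", INTENTION_OBJECTS.get(intention, []))
```

In practice, the per-modality recognizers described in the abstract would replace the keyword lookups above, and the voting step could be replaced by a learned fusion model; the sketch only shows the overall flow from multimodal cues to intention to candidate objects.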