Abstract

More than 110 million people worldwide live with some form of disability that causes them difficulty while eating. Eating Assistive Robots could meet the needs of the elderly and of people with upper limb disabilities or dysfunctions in gaining independence in eating. We are researching a robot that can assist disabled people in eating their meals. Our Eating Assistive Robot detects the user's face and determines whether his/her mouth is open or closed. The robot repeatedly brings a pre-filled, replaceable spoon of food to the user's mouth until the food in the container runs out. Our methodology is as follows: a live camera feed is used to detect human faces, after which the Affectiva library calculates how far the mouth is open. Once a set threshold is exceeded, the program starts the stepper motor, which brings the pre-filled spoon of food to the user's mouth.
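A minimal sketch of this control loop is shown below. It is illustrative only: mouth_open_percent is a hypothetical placeholder standing in for the Affectiva SDK's mouth-open estimate (the SDK call itself is not reproduced here), and the 50% threshold is an assumed value, not necessarily the one used in the actual system.

```python
import cv2

MOUTH_OPEN_THRESHOLD = 50.0  # assumed threshold, in percent


def mouth_open_percent(frame) -> float:
    """Placeholder for the Affectiva mouth-open score (0-100).

    The real system wires this to the Affectiva detector; that call
    is not reproduced in this sketch.
    """
    raise NotImplementedError("connect to the Affectiva SDK here")


def feeding_loop(trigger_spoon) -> None:
    """Watch the live camera feed and fire the spoon cycle when the
    mouth-open score crosses the threshold."""
    cap = cv2.VideoCapture(0)  # live camera feed (default webcam)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if mouth_open_percent(frame) >= MOUTH_OPEN_THRESHOLD:
                trigger_spoon()  # start one stepper motor feeding cycle
    finally:
        cap.release()
```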

Highlights

  • Eating Assistive Robots should meet the needs of the elderly and people with upper limb disabilities or dysfunctions in gaining independence in eating

  • A dry electrode is more efficient than a wet electrode for long-duration, smooth operation [18]

  • Discriminative Optimization (DO) is a method for solving vision problems by learning updates from training examples; for 3D vision, we evaluated the potential of DO on rigid point cloud registration and showed that it outperforms state-of-the-art approaches [19]

  • For understanding facial behavior, the open-source OpenFace toolkit is among the best techniques in computer vision

  • The face is detected from 34 landmark points on a human face; a mouth-openness measure computed from such landmarks is sketched after this list
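As a concrete illustration of how landmark points can yield a mouth-openness signal, the sketch below computes a simple mouth aspect ratio. The landmark indices are hypothetical placeholders; the actual layout of the 34 points depends on the face-tracking SDK.

```python
import math

# Hypothetical indices into the 34-point landmark array; the real
# layout depends on the face-tracking SDK in use.
UPPER_LIP, LOWER_LIP = 30, 33
LEFT_CORNER, RIGHT_CORNER = 28, 32


def mouth_aspect_ratio(landmarks) -> float:
    """Vertical lip gap divided by mouth width; larger means more open.

    `landmarks` is a sequence of 34 (x, y) points from the face tracker.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    gap = dist(landmarks[UPPER_LIP], landmarks[LOWER_LIP])
    width = dist(landmarks[LEFT_CORNER], landmarks[RIGHT_CORNER])
    return gap / width
```

A ratio like this can be compared against a calibrated threshold to decide that the mouth is open, mirroring the thresholding step described in the abstract.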


Introduction

Eating Assistive Robots should meet the needs of the elderly and of people with upper limb disabilities or dysfunctions in gaining independence in eating. We have researched how robots can conveniently assist disabled people in eating. We have developed an algorithm using the Affectiva library that detects a face and calculates what percentage the mouth is open, and we have built hardware that performs the physical motions. The system attends to a single designated face among multiple detected faces without stopping or interrupting its processing. Once the mouth-open percentage crosses the threshold, a stepper motor drives the pre-filled spoon to the user's mouth.
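On the hardware side, a stepper motor feeding cycle might look like the following minimal sketch for a Raspberry Pi using the RPi.GPIO library. The pin numbers, step sequence, step count, and timing values are all assumptions for illustration, not the values used in the actual robot.

```python
import time

import RPi.GPIO as GPIO

# Assumed wiring: four GPIO pins driving a stepper through a driver
# board. Pin numbers, step count, and timing are illustrative only.
COIL_PINS = (17, 18, 27, 22)
FULL_STEP_SEQ = [(1, 1, 0, 0), (0, 1, 1, 0), (0, 0, 1, 1), (1, 0, 0, 1)]
STEPS_TO_MOUTH = 512   # assumed travel from food container to mouth
STEP_DELAY = 0.002     # seconds between steps


def setup() -> None:
    GPIO.setmode(GPIO.BCM)
    for pin in COIL_PINS:
        GPIO.setup(pin, GPIO.OUT, initial=GPIO.LOW)


def run_steps(sequence, n_steps: int) -> None:
    """Energize the coils in order; playing the sequence backwards
    reverses the direction of rotation."""
    for i in range(n_steps):
        for pin, level in zip(COIL_PINS, sequence[i % len(sequence)]):
            GPIO.output(pin, level)
        time.sleep(STEP_DELAY)


def trigger_spoon() -> None:
    """One feeding cycle: spoon to mouth, pause, back to the container."""
    run_steps(FULL_STEP_SEQ, STEPS_TO_MOUTH)
    time.sleep(2.0)  # give the user time to take the food
    run_steps(list(reversed(FULL_STEP_SEQ)), STEPS_TO_MOUTH)
```

Passing trigger_spoon to the feeding loop sketched under the abstract closes the perception-to-actuation loop: the camera detects an open mouth, and the motor executes one spoon cycle.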
