Abstract

Care robots that provide dietary supplements to the elderly and the disabled should be able to respond intelligently, in real time, to the user’s intention to eat and provide the food they want. To this end, we developed a deep-learning-based user interface that lets users select the food to be eaten using six gaze directions. The model was implemented as a lightweight, single-stage object-detection network that can run on commercial tablets, and it achieved an accuracy of 0.9857.
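As an illustration of the interface described above, mapping a detected gaze-direction class to one of six on-screen food slots might look like the following minimal sketch. The class names, food labels, and function names are assumptions for illustration, not details from the paper:

```python
# Hypothetical sketch: the detector outputs one of six gaze-direction
# classes, and each class is bound to an on-screen food slot.
GAZE_CLASSES = ["up-left", "up", "up-right", "down-left", "down", "down-right"]

def select_food(predicted_class: str, foods: list) -> str:
    """Return the food in the slot matching the detected gaze direction."""
    if len(foods) != len(GAZE_CLASSES):
        raise ValueError("expected exactly six food slots")
    return foods[GAZE_CLASSES.index(predicted_class)]

foods = ["rice", "soup", "vegetables", "fish", "fruit", "water"]
print(select_food("down", foods))
```

In a real system the `predicted_class` would come from the object-detection model's highest-confidence output for the current camera frame.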
