Abstract

Automatic feeding robots are designed to address the problem of self-feeding for people with disabilities and have broad application prospects in medical care and elderly care. However, most existing feeding robotic devices are limited to fixed operation and cannot fully meet the varied needs of users with disabilities. Such systems typically complete tasks only within preset procedures and lack an interaction process with the user. In this paper, we develop a multimodal automatic feeding robotic device consisting of a speech recognition module, a visual perception module, and an interaction control module. The speech recognition module provides an entry point for users to interact with the robot: through speech, users can decide the order and pace of eating on their own. The visual perception module uses RGB-D images to detect food items and faces and to localize them in the real world. Finally, the interaction control module controls the robot to deliver food to the user's mouth. We evaluate each module separately, and all of them reach a level suitable for daily use. We then conduct human feeding experiments on a real robotic device; in 30 attempts, the feeding success rate reached 83.3%.
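
The abstract does not give implementation details for the visual perception module, but the localization step it describes (mapping a detection in an RGB-D image to a position in the real world) is commonly done by deprojecting the detected pixel with the pinhole camera model. The sketch below illustrates that standard computation only; the intrinsics, detected pixel, and depth value are assumed for illustration and are not taken from the paper.

```python
# Minimal sketch: deproject a detected pixel from an RGB-D image into a 3D
# point in the camera frame using the pinhole model. All numeric values
# (intrinsics, pixel coordinates, depth) are hypothetical.

def deproject_pixel(u, v, depth_m, fx, fy, cx, cy):
    """Map pixel (u, v) with depth in metres to a camera-frame (x, y, z) point."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)


if __name__ == "__main__":
    # Assumed intrinsics for a 640x480 RGB-D sensor.
    fx, fy, cx, cy = 600.0, 600.0, 320.0, 240.0
    # Assumed centre pixel of a detected food item and its depth reading.
    u, v = 350, 260
    depth_m = 0.45
    point = deproject_pixel(u, v, depth_m, fx, fy, cx, cy)
    print("Food position in camera frame (m):", point)
```

In a full pipeline, the resulting camera-frame point would still need to be transformed into the robot's base frame via the hand-eye calibration before the arm can move to it.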
