Abstract

Human-robot interaction plays a major role in the design of assistive robots. A robotic arm was developed as an assistive aid to feed physically challenged people. In this system, the target point for the food-carrying end-effector must be estimated from the user's face as detected by the camera mounted on the robotic manipulator. By design, the user may be situated anywhere between 25 cm and 100 cm from the robotic arm's end-effector. To dynamically estimate the depth to the point where a spoonful of food must be delivered, an intuitive technique based on the detected face of the user was evaluated using two 2D cameras (Open MV7 and NOIR 8MP) and a 3D camera (Intel 435i). The results were calibrated against distances obtained with an ultrasonic sensor, and the 3D camera served as the benchmark. For the current application, the 3D camera produced a much lower error than the 2D cameras. The NOIR camera performed better than the Open MV7 and could be a good alternative to a 3D camera.
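The abstract does not spell out how depth is recovered from a detected face with a 2D camera; a common approach is the pinhole-camera relation, where distance is proportional to the known real-world face width divided by its width in pixels. The sketch below illustrates that idea with OpenCV; it is not the authors' published code, and the calibration constants (FOCAL_LENGTH_PX, REAL_FACE_WIDTH_CM), the helper name estimate_distance_cm, and the use of a Haar cascade detector are all illustrative assumptions.

```python
# Minimal sketch of face-detection-based depth estimation with a 2D camera,
# assuming the pinhole relation: distance = focal_px * real_width / pixel_width.
import cv2

FOCAL_LENGTH_PX = 600.0    # hypothetical focal length in pixels (from calibration)
REAL_FACE_WIDTH_CM = 14.0  # assumed average adult face width

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def estimate_distance_cm(frame):
    """Return the estimated camera-to-face distance in cm, or None if no face."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    # Take the largest detection, assumed to be the user's face.
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    return FOCAL_LENGTH_PX * REAL_FACE_WIDTH_CM / w

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    d = estimate_distance_cm(frame)
    # The paper constrains the user to 25-100 cm from the end-effector.
    if d is not None and 25.0 <= d <= 100.0:
        print(f"Estimated distance to face: {d:.1f} cm")
cap.release()
```

Under this scheme, a 3D camera or ultrasonic sensor provides ground-truth distances against which such per-frame estimates can be calibrated, consistent with the benchmarking described above.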
