Abstract

This paper presents a multimodal Human–Machine Interface system that combines an Electrooculography (EOG) Interface and a Brain–Machine Interface (BMI). This multimodal interface has been used to control a robotic arm to perform pick-and-place tasks in a three-dimensional environment. Five volunteers were asked to pick up two boxes and place them in different positions. The results demonstrate the feasibility of the system for pick-and-place tasks. Using the multimodal interface, all of the volunteers, including naive users, were able to successfully move both objects with the robotic arm within a satisfactory period of time.
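The abstract does not specify how the two interfaces share control of the arm. Purely as an illustrative sketch, and under the assumption that EOG eye-movement commands steer the end effector in Cartesian space while BMI commands trigger grasping, the Python example below shows one way such a multimodal command stream could drive a pick-and-place task. All names (EndEffector, EOG_DIRECTIONS, toggle_grasp, STEP) are hypothetical and are not taken from the paper.

```python
# Hypothetical sketch: fusing EOG directional commands with BMI grasp commands
# to drive a robotic arm's end effector in a pick-and-place task.
# Names and values are illustrative assumptions, not the paper's implementation.

from dataclasses import dataclass
from typing import Tuple

STEP = 0.02  # metres moved per EOG command (assumed value)

# Assumed mapping from decoded EOG eye movements to Cartesian directions.
EOG_DIRECTIONS = {
    "look_left":  (-1.0, 0.0, 0.0),
    "look_right": ( 1.0, 0.0, 0.0),
    "look_up":    ( 0.0, 0.0, 1.0),
    "look_down":  ( 0.0, 0.0, -1.0),
}

@dataclass
class EndEffector:
    """Minimal stand-in for the robotic arm's end-effector state."""
    position: Tuple[float, float, float] = (0.0, 0.0, 0.0)
    gripper_closed: bool = False

def apply_command(eff: EndEffector, source: str, command: str) -> EndEffector:
    """Update the end effector from one decoded command.

    In this sketch, EOG commands translate the end effector in 3-D and
    BMI commands toggle the gripper (pick or place). The split is an
    assumption used only to illustrate shared multimodal control.
    """
    if source == "eog" and command in EOG_DIRECTIONS:
        dx, dy, dz = EOG_DIRECTIONS[command]
        x, y, z = eff.position
        eff.position = (x + dx * STEP, y + dy * STEP, z + dz * STEP)
    elif source == "bmi" and command == "toggle_grasp":
        eff.gripper_closed = not eff.gripper_closed
    return eff

if __name__ == "__main__":
    arm = EndEffector()
    # Simulated command stream: steer with the eyes, grasp with the BMI.
    for src, cmd in [("eog", "look_right"), ("eog", "look_up"),
                     ("bmi", "toggle_grasp"), ("eog", "look_left"),
                     ("bmi", "toggle_grasp")]:
        apply_command(arm, src, cmd)
        print(src, cmd, arm.position, "gripper closed:", arm.gripper_closed)
```

The design point illustrated here is the division of labour: a low-bandwidth but fast channel (eye movements) handles continuous positioning, while the BMI issues discrete, less frequent decisions such as grasp and release.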
