Abstract

In this paper, a simple human-machine interface allowing people with severe disabilities to control a motorized wheelchair using mouth and tongue gestures is presented. The development of the proposed system consists of three principal phases. The first phase is mouth detection, which is performed by using a Haar cascade to detect the face area and template matching to detect mouth and tongue gestures in the lower face region. The second phase is command extraction: the mouth and tongue gesture command is determined from the detected gesture, the time taken to execute the gesture, and the previous command, which is stored in each frame. Finally, the gesture commands are sent to the wheelchair as instructions over a Bluetooth serial port. The hardware used for this project comprised a laptop with a universal serial bus (USB) webcam as a vision-based control unit, a Bluetooth module to receive instructions from the vision control unit, a standard joystick for use in case of emergency, a joystick emulator that delivers to the control board signals similar to those normally generated by the standard joystick, and ultrasonic sensors to provide safe navigation. The experimental results showed the success of the proposed control system based on mouth and tongue gestures.
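To make the three-phase pipeline concrete, the following is a minimal sketch, not taken from the paper, of how such a system could be structured in Python using OpenCV and pyserial. The template file names, the 0.7 match threshold, the 1.0 s gesture hold time, the command labels, and the /dev/rfcomm0 Bluetooth serial port are all illustrative assumptions, not values reported by the authors.

```python
import time
import cv2
import serial  # pyserial

# Hypothetical gesture templates; file names are placeholders, not from the paper.
TEMPLATES = {
    "FORWARD": cv2.imread("mouth_open.png", cv2.IMREAD_GRAYSCALE),
    "LEFT":    cv2.imread("tongue_left.png", cv2.IMREAD_GRAYSCALE),
    "RIGHT":   cv2.imread("tongue_right.png", cv2.IMREAD_GRAYSCALE),
    "STOP":    cv2.imread("mouth_closed.png", cv2.IMREAD_GRAYSCALE),
}
MATCH_THRESHOLD = 0.7  # assumed minimum normalized correlation score
HOLD_SECONDS = 1.0     # assumed time a gesture must be held before it counts

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
bt = serial.Serial("/dev/rfcomm0", 9600)  # placeholder Bluetooth serial port

def detect_gesture(gray):
    """Phase 1: Haar cascade face detection, then template matching
    over the lower third of the face region."""
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest detected face
    lower = gray[y + 2 * h // 3 : y + h, x : x + w]
    best_label, best_score = None, MATCH_THRESHOLD
    for label, tpl in TEMPLATES.items():
        if tpl is None or tpl.shape[0] > lower.shape[0] or tpl.shape[1] > lower.shape[1]:
            continue
        score = cv2.minMaxLoc(cv2.matchTemplate(lower, tpl, cv2.TM_CCOEFF_NORMED))[1]
        if score > best_score:
            best_label, best_score = label, score
    return best_label

cap = cv2.VideoCapture(0)  # USB webcam
prev_command = "STOP"      # previous command, carried across frames
candidate, gesture_since = None, time.time()
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gesture = detect_gesture(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    # Phase 2: command extraction -- a gesture becomes a command only after
    # being held for HOLD_SECONDS and only if it differs from the previous command.
    if gesture != candidate:
        candidate, gesture_since = gesture, time.time()
    elif (gesture is not None and gesture != prev_command
          and time.time() - gesture_since >= HOLD_SECONDS):
        bt.write((gesture + "\n").encode())  # Phase 3: send over Bluetooth serial
        prev_command = gesture
```

The hold-time check mirrors the paper's use of gesture duration and the stored previous command to avoid issuing a new instruction on every frame; only a deliberately held, changed gesture is forwarded to the wheelchair's control board.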
