Abstract

A human–machine interface with head control can be applied in many domains. This technology is particularly valuable for people who cannot use their hands, enabling them to operate a computer or to speak. This study combines several image processing and computer vision technologies, a digital camera, and software to develop the following system. Image processing technologies are adopted to capture the features of head motion; the recognized head gestures include forward, upward, downward, leftward, rightward, right-upper, right-lower, left-upper, and left-lower. Corresponding sound modules are used so that patients can communicate with others through a phonetic system and numeric tables. Innovative skin color recognition technology extracts head features from images. The barycenter of the pixels in the feature area is then quickly calculated, and the offset of the barycenter is observed to judge the direction of head motion. This architecture can substantially reduce the distraction of non-targeted objects and enhance the accuracy of systematic judgment.
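The barycenter-offset idea in the abstract can be illustrated with a short sketch. This is not the paper's exact skin-color model; the RGB threshold rule, the dead-zone value, and the function names below are illustrative assumptions.

```python
import numpy as np

def skin_mask(rgb):
    """Rough RGB skin-color rule (an illustrative threshold, not the
    paper's exact recognition technology): red-dominant pixels."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b) & ((r - g) > 15)

def barycenter(mask):
    """Barycenter (centroid) of the masked pixels: mean column/row index."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    return xs.mean(), ys.mean()

def head_direction(ref, cur, dead_zone=5.0):
    """Map the barycenter offset (cur - ref) to one of the nine gestures
    named in the abstract. `dead_zone` (pixels) is an assumed tolerance."""
    dx, dy = cur[0] - ref[0], cur[1] - ref[1]
    horiz = "right" if dx > dead_zone else "left" if dx < -dead_zone else ""
    vert = "lower" if dy > dead_zone else "upper" if dy < -dead_zone else ""
    if not horiz and not vert:
        return "forward"
    if horiz and vert:
        return f"{horiz}-{vert}"          # e.g. "right-upper"
    return {"right": "rightward", "left": "leftward",
            "upper": "upward", "lower": "downward"}[horiz or vert]
```

For example, a barycenter that drifts 20 pixels to the right of its reference position with no vertical change would be classified as "rightward".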

Highlights

  • For some people with limb problems, it is impossible to have accurate control over hand movement. Common assistive devices for them include a head-controlled stick-shaped device, a mouth-held stick device, and a mouth-controlled blow device.

  • When the head is held straight without any deflection, the eyes lie almost on the same horizontal line; when the head tilts right or left, the angle between the line linking the canthi of the eyes and the horizontal changes.

  • Unlike the traditional method, where the architecture is designed for different movement control training, the combination of computer vision technology and movement control training enables users to train themselves with head-turn games, makes rehabilitation training more attractive, and equips rehabilitation with both training and entertainment.
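The second highlight's tilt cue can be computed directly once the two eye canthi have been located. A minimal sketch, assuming eye positions are already detected and given in image coordinates (x rightward, y downward); the function name is hypothetical:

```python
import math

def eye_line_tilt(left_eye, right_eye):
    """Angle in degrees between the line linking the two eye canthi and
    the horizontal: roughly 0 when the head is held straight, and
    increasingly positive or negative as the head tilts."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))
```

A threshold on this angle would then separate "straight" from "tilted left/right" head poses.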



Introduction

For some people with limb problems, it is impossible to have accurate control over hand movement. Common assistive devices for them include a head-controlled stick-shaped device, a mouth-held stick device, and a mouth-controlled blow device. These assistive tools are not sanitary, comfortable, or convenient because users have to wear or touch mechanical sensing devices [1,2,3]. Given the inconvenience of the existing assistive systems for the disabled, a system combining computer-based vision technology and movement detection was developed [4,5,6]. Lalithamani used a single web camera as the input device to recognize hand gestures; some of these gestures included controlling the mouse cursor, clicking actions, and a few shortcuts for opening specific applications [16].

Conceptual Design
User Testing
Three‐numeric‐code
The use of the frequently used sentences table with the head-control interface
Experimental
Conclusions
