Abstract

We propose a two-stage hand gesture recognition architecture to support a patient assistance system. Some medical conditions limit mobility, leaving patients reliant on medical staff to meet their needs. In such cases, a phone or intercom is an inconvenient way to call for help. A vision-based system operated by changing the orientation of the fingers can send specific messages without requiring arm movements. However, vision-based hand gesture recognition is hindered by occlusion, background clutter, and variations in illumination. We therefore developed a two-stage architecture: the first stage produces a saliency map to simplify recognition, and the second stage performs classification. A novel combined loss function optimizes the saliency detection model and makes the saliency map more precise, and an adaptive kernel-based channel attention layer emphasizes salient features. The proposed architecture achieved precise saliency detection on four benchmark datasets and high-accuracy recognition on two. We also designed an interface through which patients send specific messages to the medical staff using hand gestures. The interface helps patients request assistance and connect with medical staff without leaving the bed or involving a third party.
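The abstract does not detail the adaptive kernel-based channel attention layer, but layers of this kind typically pool each channel to a single descriptor, mix neighboring channel descriptors with a 1-D convolution whose kernel size adapts to the channel count, and re-weight the channels with the sigmoid of the result. The sketch below illustrates that general pattern in NumPy; the size heuristic (`gamma`, `b`) and the uniform stand-in kernel are assumptions for illustration, not the authors' trained layer.

```python
import numpy as np

def adaptive_kernel_size(channels, gamma=2, b=1):
    # Heuristic assumed here: kernel size grows with log2 of the channel
    # count and is forced to be odd so the convolution stays centered.
    t = int(abs((np.log2(channels) + b) / gamma))
    return t if t % 2 else t + 1

def channel_attention(feature_map, gamma=2, b=1):
    """Re-weight the channels of a (C, H, W) feature map.

    Global-average-pool each channel, mix neighboring channel
    descriptors with an adaptively sized 1-D convolution, squash with
    a sigmoid, and scale each input channel by its weight.
    """
    c = feature_map.shape[0]
    k = adaptive_kernel_size(c, gamma, b)
    # One scalar per channel (global average pooling).
    desc = feature_map.mean(axis=(1, 2))
    # A uniform kernel stands in for the learned weights of a trained layer.
    kernel = np.ones(k) / k
    pad = k // 2
    mixed = np.convolve(np.pad(desc, pad, mode="edge"), kernel, mode="valid")
    weights = 1.0 / (1.0 + np.exp(-mixed))  # sigmoid gating in [0, 1]
    return feature_map * weights[:, None, None]
```

Because the gate lies in (0, 1), the layer can only attenuate channels relative to the input, which is how salient channels end up emphasized relative to suppressed ones.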
