Abstract

Hand gesture segmentation is an essential step in recognizing hand gestures for human–robot interaction. However, complex backgrounds and the wide variety of gesture shapes lead to low semantic segmentation accuracy in existing lightweight methods, owing to imprecise features and an imbalance between branches. To remedy these problems, we propose a new segmentation structure for hand gestures. Based on this structure, a novel tri-branch lightweight segmentation network (BLSNet) is proposed for gesture segmentation. Corresponding to the parts of the structure, three branches are employed to extract local features, boundaries, and semantic hand features. In the boundary branch, to extract multi-scale features of hand gesture contours, a novel multi-scale depth-wise strip convolution (MDSC) module is proposed, exploiting the directionality of gesture boundaries. For hand boundary details, we propose a new boundary weight (BW) module based on boundary attention. To locate the hand, a semantic branch with successive downsampling is used to cope with complex backgrounds. The Ghost bottleneck serves as the building block for the entire BLSNet network. To verify the effectiveness of the proposed network, experiments were conducted on the OUHANDS and HGR1 datasets, and the results demonstrate that the proposed method outperforms the comparison methods.
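To illustrate the idea behind depth-wise strip convolution at multiple scales, the following is a minimal NumPy sketch. It is not the paper's implementation: the kernel lengths (7, 11, 21), the uniform (unlearned) averaging weights, and the residual summation are all assumptions made for demonstration; in practice the strip kernels would be learned per channel.

```python
import numpy as np

def depthwise_strip_conv(x, k):
    """Separable strip convolution applied per channel (depth-wise).

    Approximates a k x k receptive field with a 1 x k pass followed by a
    k x 1 pass, using uniform averaging weights for illustration.
    x: array of shape (C, H, W); k: odd strip length.
    """
    C, H, W = x.shape
    pad = k // 2
    # Horizontal 1 x k pass (edge padding keeps the spatial size fixed).
    xp = np.pad(x, ((0, 0), (0, 0), (pad, pad)), mode="edge")
    h = np.zeros_like(x)
    for i in range(k):
        h += xp[:, :, i:i + W]
    h /= k
    # Vertical k x 1 pass.
    hp = np.pad(h, ((0, 0), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros_like(x)
    for i in range(k):
        out += hp[:, i:i + H, :]
    out /= k
    return out

def mdsc(x, scales=(7, 11, 21)):
    """Multi-scale depth-wise strip convolution sketch.

    Sums strip convolutions at several assumed kernel lengths and adds an
    identity (residual) path, so boundary responses at different contour
    widths are aggregated into one feature map.
    """
    return x + sum(depthwise_strip_conv(x, k) for k in scales)
```

The long, thin 1 x k and k x 1 kernels are what give the module its directional bias: horizontal and vertical strips respond strongly along elongated hand contours while staying far cheaper than a dense k x k depth-wise kernel.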
