Wearable devices deployed across diverse user groups commonly suffer from degraded static sign language recognition accuracy, weak noise resistance, and insufficient system robustness caused by inter-user variability in gesture execution. This paper proposes a novel static sign language recognition method enhanced by a self-attention mechanism. A weighting function first highlights the features most discriminative for gesture classification; the self-attention mechanism then concentrates on these key features, and a convolutional neural network extracts features and performs classification, enabling accurate recognition of different types of static sign language under both standard and non-standard gestures. Experimental results show that the proposed method achieves an average accuracy of 99.52% on the standard static sign language recognition task, evaluated on 36 standard static gestures selected from a reference American Sign Language dataset. Under random angular bias conditions of ±(0°–9°] and ±(9°–18°], the average recognition rates are 98.63% and 86.33%, respectively. These findings indicate that, compared with existing methods, the proposed method not only maintains a high recognition rate for standard static gestures but also exhibits superior noise resistance and robustness, rendering it suitable for static sign language recognition among diverse user populations.
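The abstract does not specify the exact form of the weighting function or the network configuration, so the following is only a minimal PyTorch sketch of the described pipeline under stated assumptions: a learnable per-channel weighting stands in for the weighting function, a single-head spatial self-attention block re-weights the highlighted features, and a small CNN backbone with a 36-class head stands in for the unspecified architecture. All layer sizes and names are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SelfAttention2d(nn.Module):
    """Single-head self-attention over the spatial positions of a feature map."""
    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (B, HW, C//8)
        k = self.key(x).flatten(2)                     # (B, C//8, HW)
        attn = torch.softmax(q @ k / (q.size(-1) ** 0.5), dim=-1)  # (B, HW, HW)
        v = self.value(x).flatten(2)                   # (B, C, HW)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return x + self.gamma * out                    # residual connection

class AttentionCNN(nn.Module):
    """CNN classifier with channel weighting and self-attention; 36 gesture classes."""
    def __init__(self, num_classes: int = 36):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Assumed stand-in for the paper's weighting function: one learnable
        # scalar per feature channel that emphasizes discriminative features.
        self.channel_weights = nn.Parameter(torch.ones(64, 1, 1))
        self.attention = SelfAttention2d(64)
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_classes)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.features(x)
        f = f * self.channel_weights   # highlight key feature channels
        f = self.attention(f)          # attend to the weighted key features
        return self.classifier(f)

# Example: one grayscale 64x64 gesture image -> logits over 36 classes.
logits = AttentionCNN()(torch.randn(1, 1, 64, 64))
print(logits.shape)  # torch.Size([1, 36])
```

The zero-initialized `gamma` lets training start from the plain CNN and gradually blend in attention, a common design choice for attention-augmented backbones; the actual method may integrate the weighting and attention differently.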