Gestures are one of the key modes of human–computer interaction and are currently recognized mainly through two approaches: machine-vision imaging and wearable stress/strain sensing. However, the former imposes strict requirements on lighting and background. This paper proposes a feature fusion algorithm that incorporates mechanoluminescent sensors for gesture recognition under low-light conditions. Unlike existing methods, the algorithm builds a system that integrates mechanoluminescent sensors into visual gesture recognition: mechanical stress triggers the sensor's luminescence, providing an additional light source that enhances image quality in low-light environments. To this end, a mechanoluminescent sensor was fabricated using the fluorescent emission properties of ZnS:Cu, and its mechanoluminescence response characteristics were measured. Building on these results, a gesture-feature fusion algorithm was developed for low-illumination backgrounds. The algorithm decomposes gesture images in the YCbCr color space and extracts HOG features from the Cr channel (Cr-HOG). In parallel, deep features are taken from the pooling layer (VGG16-Conv5-pool) following the Conv5-3 convolutional layer of the VGG16 model, and the two feature sets are fused. A one-versus-one composite SVM classifier is then constructed to train and test the gesture recognition model. Finally, the feasibility of the enhanced recognition method was validated through an unmanned vehicle control experiment. Experimental results show that, in environments with illuminance below 10 lux (measured with a light meter), the average recognition rate of the wearable mechanoluminescent-sensor feature fusion algorithm reached 95.82 %, which is 32.24 % higher than classification with the single Cr-HOG feature and 28.27 % higher than with the single VGG16-Conv5-pool feature.
Compared with feature extraction using ResNet-50 and DenseNet-121 networks, the recognition rate for gestures under low-light conditions is approximately 30 % higher. Compared with other image-enhancement-based recognition methods, the recognition rate improves by up to 30.88 %.
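The fusion pipeline described above (YCbCr decomposition, Cr-HOG extraction, concatenation with a pooled VGG16 deep feature, one-versus-one SVM classification) can be sketched as follows. This is a minimal illustration, not the authors' implementation: scikit-image and scikit-learn are assumed as libraries, the HOG parameters are conventional defaults, and a random vector stands in for the real VGG16-Conv5-pool feature, which would normally come from a pretrained network.

```python
import numpy as np
from skimage.color import rgb2ycbcr
from skimage.feature import hog
from sklearn.svm import SVC

def cr_hog(rgb_img):
    """HOG descriptor of the Cr channel of an RGB image (Cr-HOG)."""
    # YCbCr decomposition; channel index 2 is Cr.
    cr = rgb2ycbcr(rgb_img)[:, :, 2]
    # Conventional HOG settings (illustrative, not from the paper).
    return hog(cr, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))

def fuse_features(rgb_img, deep_feat):
    """Concatenate the hand-crafted Cr-HOG and the deep CNN feature."""
    return np.concatenate([cr_hog(rgb_img), deep_feat])

if __name__ == "__main__":
    # Toy demo with two synthetic "gesture" classes; a random 512-d
    # vector stands in for the pooled VGG16 Conv5 feature.
    rng = np.random.default_rng(0)
    images = rng.random((8, 64, 64, 3))
    labels = np.repeat([0, 1], 4)
    feats = np.stack([fuse_features(im, rng.random(512)) for im in images])
    # SVC performs multiclass classification with a one-vs-one scheme.
    clf = SVC(kernel="linear", decision_function_shape="ovo")
    clf.fit(feats, labels)
    print(clf.predict(feats[:1]).shape)
```

In a real system the 512-dimensional stand-in would be replaced by the activations of the pooling layer after Conv5-3 of a pretrained VGG16, and the classifier would be trained on labeled gesture images captured with the mechanoluminescent sensor active.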