Research on continuous hand gesture recognition is motivated by applications that help deaf people communicate effectively with one another. Numerous techniques have been adopted to identify hand gestures effectively; however, data collection remains a challenging task for existing models, and recognizing hand gestures in low-resolution images is particularly difficult. In this research work, a deep learning-based hand gesture detection model is designed using hand data points to uncover the patterns used to convey information. Initially, hand gesture images are obtained from public sources. The collected inputs are preprocessed with Contrast-Limited Adaptive Histogram Equalization (CLAHE) and filtering to remove blur from the images. The preprocessed images then undergo hand segmentation using four methods: color space transformation, skin-color detection, active contours, and morphological operations. The segmented image is subjected to data point feature extraction, where the Adaptive Weighted Scale-Invariant Feature Transform is utilized for further enhancement. The extracted hand data point features are passed to an attention-based hybrid network, in which an Attention-based Hybrid 1D Convolutional Neural Network with Recurrent Neural Network recognizes the hand gestures from the hand data points. Within this hybrid network, the parameter values are optimized by the proposed Adaptive Dandelion White Shark Optimizer to enhance recognition effectiveness. The test results are validated against existing hand gesture recognition models using diverse evaluation metrics. The recommended method achieves 97% accuracy and 99% negative predictive value (NPV). Thus, the developed model is shown to significantly outperform existing models.
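The skin-color detection and morphological-operation steps of the segmentation stage can be sketched as follows. This is a minimal NumPy-only illustration, not the paper's implementation: the YCrCb transform coefficients are the standard ITU-R BT.601 values, the Cr/Cb skin thresholds are commonly cited defaults, and the 3x3 structuring element is an illustrative assumption.

```python
import numpy as np

def rgb_to_ycrcb(img):
    """Convert an RGB image (H, W, 3) with values in [0, 255] to YCrCb
    using the standard BT.601 coefficients (color space transformation)."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cr = (r - y) * 0.713 + 128.0
    cb = (b - y) * 0.564 + 128.0
    return np.stack([y, cr, cb], axis=-1)

def skin_mask(img, cr_range=(133, 173), cb_range=(77, 127)):
    """Binary skin-color mask; the Cr/Cb ranges are commonly used
    heuristic thresholds, assumed here for illustration."""
    ycrcb = rgb_to_ycrcb(img.astype(np.float64))
    cr, cb = ycrcb[..., 1], ycrcb[..., 2]
    return ((cr >= cr_range[0]) & (cr <= cr_range[1]) &
            (cb >= cb_range[0]) & (cb <= cb_range[1]))

def _shift_combine(mask, combine, init):
    """Apply a 3x3 neighborhood operation by combining shifted copies."""
    h, w = mask.shape
    m = np.pad(mask, 1, constant_values=False)
    out = init(mask)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out = combine(out, m[1 + dy:1 + dy + h, 1 + dx:1 + dx + w])
    return out

def erode(mask):
    """3x3 binary erosion: pixel stays set iff all neighbors are set."""
    return _shift_combine(mask, np.logical_and, np.ones_like)

def dilate(mask):
    """3x3 binary dilation: pixel becomes set iff any neighbor is set."""
    return _shift_combine(mask, np.logical_or, np.zeros_like)

def morphological_open(mask):
    """Opening (erosion then dilation) removes small noise speckles."""
    return dilate(erode(mask))
```

In a full pipeline of the kind the abstract describes, a mask like this would typically seed the active-contour step that refines the hand boundary before feature extraction.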