A crucial component of human-computer interaction is 3D hand pose estimation. Recent advances in computer vision have made estimating 3D hand poses simpler through the use of depth sensors. The main challenge remains unrealistic 3D hand poses, because existing models learn kinematic rules only from the training dataset, which is ambiguous, and estimating realistic 3D hand poses from such datasets is difficult because the datasets themselves are not free from anatomical errors. The model proposed in this study is trained with a closed-form expression that encodes biomechanical rules, so it does not rely entirely on the images of the annotated dataset. This work also uses a Single Shot Detection and Correction convolutional neural network (SSDC-CNN) to impose anatomical correctness at the architecture level. ResNetPlus is implemented to improve representation capability and to enhance the efficiency of error back-propagation through the network. Yoga Mudra datasets such as HANDS2017 and MSRA have been used to train and test the proposed model. Measured against the ground truth, previous hand models exhibit many anatomical errors, whereas the proposed hand model is anatomically error-free. With respect to the ground-truth hand poses, the proposed model also shows good accuracy compared to state-of-the-art hand models.
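The abstract refers to a closed-form expression encoding biomechanical rules but does not state the expression itself. The sketch below illustrates, under stated assumptions, one common way such a constraint can be written as a differentiable loss term: penalizing deviations from reference bone lengths and flexion angles outside an anatomical range. The skeleton indices, reference lengths, angle limit, and the function name `biomechanical_loss` are all hypothetical illustrations, not the authors' formulation.

```python
import math
import torch

# Hypothetical skeleton: each finger is a chain of joint indices (wrist -> tip)
# in a 21-joint hand model; only two chains are shown, the rest are analogous.
FINGERS = [[0, 1, 2, 3, 4],   # thumb
           [0, 5, 6, 7, 8]]   # index finger

# Assumed reference bone lengths in millimetres, one per consecutive joint pair.
REF_LEN = {0: torch.tensor([40.0, 35.0, 30.0, 25.0]),
           1: torch.tensor([80.0, 45.0, 25.0, 20.0])}

ANGLE_MAX = math.pi / 2  # assumed flexion limit per joint (radians)

def biomechanical_loss(joints: torch.Tensor) -> torch.Tensor:
    """joints: (B, 21, 3) predicted 3D joint positions in millimetres."""
    loss = joints.new_zeros(())
    for f, chain in enumerate(FINGERS):
        idx = torch.tensor(chain)
        # Bone vectors along the finger chain: child joint minus parent joint.
        vec = joints[:, idx[1:]] - joints[:, idx[:-1]]   # (B, 4, 3)

        # Bone-length term: squared deviation from the reference lengths.
        loss = loss + ((vec.norm(dim=-1) - REF_LEN[f]) ** 2).mean()

        # Joint-angle term: hinge penalty when the angle between consecutive
        # bones exceeds the assumed anatomical flexion limit.
        a, b = vec[:, :-1], vec[:, 1:]
        cos = (a * b).sum(-1) / (a.norm(dim=-1) * b.norm(dim=-1) + 1e-8)
        angle = torch.acos(cos.clamp(-1 + 1e-6, 1 - 1e-6))
        loss = loss + (torch.relu(angle - ANGLE_MAX) ** 2).mean()
    return loss
```

In training, a term of this kind would typically be added to the standard keypoint regression loss with a weighting factor (e.g., `loss = mse(pred, gt) + 0.1 * biomechanical_loss(pred)`), steering the network toward anatomically plausible poses rather than leaving it to learn kinematic rules from annotations alone.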