The development of prosthetic hands has advanced significantly in recent years, with the aim of providing more intuitive and functional solutions for individuals with upper-limb amputations. This research presents a novel approach to prosthetic hand control that integrates 3D hand gesture recognition with object recognition and manipulation capabilities. The proposed system uses a pre-trained object recognition model, adapted through transfer learning, to enable the prosthetic hand to perceive and identify objects in its vicinity. Because the model leverages a large dataset of objects, the hand can recognize a wide array of everyday items, enhancing its versatility. In addition, the prosthetic hand incorporates a 3D hand gesture recognition system that allows users to control the hand's movements and actions seamlessly. By recognizing specific gestures, such as grasping, lifting, and releasing, users can intuitively interact with their environment and perform a variety of tasks with ease. The synergy between gesture recognition and object recognition creates a powerful framework for prosthetic hand control, and the system's adaptability makes it suitable for a broad range of applications, from assisting with daily tasks to improving quality of life for individuals with upper-limb amputations. The results of this study demonstrate the feasibility and effectiveness of combining 3D hand gesture recognition with pre-trained object recognition through transfer learning, opening new possibilities for enhancing prosthetic hand functionality and usability. The proposed model combines YOLOv7 object detection with pre-trained models and achieves 99.8% accuracy, surpassing existing models.
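As a rough illustration of the transfer-learning setup the abstract describes, the sketch below freezes an ImageNet-pretrained backbone and retrains only a new classification head on an object-recognition task. The framework (PyTorch/torchvision), the ResNet-18 backbone, and the class count are assumptions for illustration; the abstract names only YOLOv7 and unspecified pre-trained models.

```python
# Minimal transfer-learning sketch (assumed PyTorch/torchvision stack;
# the paper's actual backbone and training details are not given here).
import torch
import torch.nn as nn
from torchvision import models

NUM_OBJECT_CLASSES = 20  # hypothetical number of everyday-object classes

# Load an ImageNet-pretrained backbone and freeze its feature extractor,
# the standard transfer-learning setup the abstract refers to.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in backbone.parameters():
    param.requires_grad = False

# Replace the final classification layer so only the new head is trained.
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_OBJECT_CLASSES)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One training step on a dummy batch to show the fine-tuning loop shape.
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, NUM_OBJECT_CLASSES, (4,))
logits = backbone(images)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
```

Freezing the backbone keeps the general visual features learned on the large source dataset while adapting only the task-specific head, which is what lets a prosthetic-hand system recognize a wide range of everyday objects without training from scratch.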