Service robots are rapidly transitioning from concept to reality, and the field of prosthetics is evolving at a comparable pace; both areas are now highly relevant to industry. Advances in these fields continue to push the boundaries of what is possible, driving the creation of individual arm and hand prosthetics, whether as standalone units or combined packages. This trend is reinforced by the rise of collaborative robots that integrate seamlessly with human counterparts in real-world applications. This paper presents an open-source, 3D-printed robotic arm that has been assembled and programmed using two distinct approaches. The first approach controls the hand via teleoperation, using a camera and machine-learning-based hand pose estimation; the paper details the programming techniques and processes required to capture data from the camera and convert it into hardware signals. The second approach employs kinematic control using the Denavit-Hartenberg method to define motion and determine the position of the end effector in 3D space. Additionally, this work discusses the assembly of the arm and hand and the modifications made to them to create a cost-effective and practical solution. Implementing teleoperation typically requires numerous sensors and cameras to ensure smooth and reliable operation. This paper explores methods enabled by artificial intelligence (AI) that reduce the need for extensive sensor arrays and equipment, and it investigates how AI-generated data can be translated into tangible hardware applications across various fields. Advances in computer vision, combined with AI capable of accurately predicting poses, have the potential to revolutionize the way we control and interact with the world around us.
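As a minimal sketch of the first approach, the loop below maps camera-based hand pose estimates to servo commands. The abstract does not name the libraries or hardware interface used; MediaPipe Hands for landmark detection, OpenCV for capture, a serial-connected servo controller, and the 0.25 normalization constant are all assumptions for illustration, not the paper's implementation.

```python
# Sketch: camera-based hand pose estimation mapped to servo commands.
# Assumptions (not from the paper): MediaPipe Hands for landmarks, OpenCV
# for capture, and a serial-connected microcontroller driving the servos.
import cv2
import mediapipe as mp
import serial

ser = serial.Serial("/dev/ttyUSB0", 115200)  # hypothetical controller port
hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.7)
cap = cv2.VideoCapture(0)

# (tip, knuckle) landmark index pairs per the MediaPipe hand model:
FINGERS = [(8, 5), (12, 9), (16, 13), (20, 17)]  # index, middle, ring, pinky

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        lm = results.multi_hand_landmarks[0].landmark
        angles = []
        for tip, base in FINGERS:
            # Fingertip-to-knuckle distance in normalized image coordinates.
            d = ((lm[tip].x - lm[base].x) ** 2 + (lm[tip].y - lm[base].y) ** 2) ** 0.5
            # Clamp and scale onto a 0-180 degree servo range (0.25 is a guess).
            angles.append(int(min(max(d / 0.25, 0.0), 1.0) * 180))
        ser.write((",".join(map(str, angles)) + "\n").encode())  # hardware signal
```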
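For the second approach, a compact sketch of forward kinematics under the standard Denavit-Hartenberg convention is shown below: each joint contributes one homogeneous transform, and chaining them yields the end-effector position in 3D space. The three-link parameter table is a placeholder, not the arm's actual geometry.

```python
# Sketch: end-effector position via standard Denavit-Hartenberg parameters.
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform from frame i-1 to frame i (standard DH convention)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def end_effector_position(joint_angles, dh_table):
    """Chain the per-joint transforms and return the end-effector XYZ."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_table):
        T = T @ dh_transform(theta, d, a, alpha)
    return T[:3, 3]

# Hypothetical 3-DOF table: (d, a, alpha) per link, lengths in metres.
DH_TABLE = [(0.10, 0.0, np.pi / 2), (0.0, 0.12, 0.0), (0.0, 0.10, 0.0)]
print(end_effector_position([0.0, np.pi / 4, -np.pi / 4], DH_TABLE))
```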