Abstract

The ability to recognize and interact with a variety of doorknob designs is an important component on the path to true robot adaptability, allowing robotic systems to interact effectively with diverse environments and objects. The problem addressed in this paper is the development and implementation of a method for recognizing the position of a door handle with a robot using data from an RGBD camera. To this end, we propose a novel approach for autonomous robots that allows them to identify and manipulate door handles in different environments using data obtained from RGBD cameras. This was achieved by creating and annotating a comprehensive dataset of 5,000 images of door handles captured from different angles, labeled with the coordinates of the bounding-box vertices. The architecture of the proposed approach is built on MobileNetV2, combined with a custom decoder that upsamples the output to a resolution of 448 pixels. A new activation function, designed specifically for this network, improves the accuracy and efficiency of raw data processing. A key result of this study is the model's ability to work in real time, processing up to 16 images per second. This research paves the way for new advances in robotics and computer vision, making a substantial contribution to the practical deployment of autonomous robots across a wide range of applications.
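To make the described architecture more concrete, the following is a minimal sketch of an encoder-decoder along the lines summarized above: a MobileNetV2 feature extractor followed by a decoder that restores a 448×448 resolution. The class name, layer widths, output head, and the use of only the RGB channels are assumptions for illustration, and the paper's custom activation function is not specified here, so a standard ReLU stands in as a placeholder.

```python
# Hypothetical sketch, not the authors' implementation: MobileNetV2 encoder
# plus a simple upsampling decoder producing a 448x448 output map.
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2


class HandleDetector(nn.Module):
    def __init__(self):
        super().__init__()
        # MobileNetV2 feature extractor: a 448x448 input yields 14x14x1280 features.
        self.encoder = mobilenet_v2(weights=None).features
        # Decoder: five 2x upsampling stages restore the 448x448 resolution.
        channels = [1280, 256, 128, 64, 32, 16]
        blocks = []
        for c_in, c_out in zip(channels[:-1], channels[1:]):
            blocks += [
                nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
                nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),  # placeholder for the paper's custom activation
            ]
        self.decoder = nn.Sequential(*blocks)
        # Output head: per-pixel handle likelihood map (assumed output format;
        # the paper annotates bounding-box vertices).
        self.head = nn.Conv2d(16, 1, kernel_size=1)

    def forward(self, x):
        return self.head(self.decoder(self.encoder(x)))


if __name__ == "__main__":
    model = HandleDetector()
    rgb = torch.randn(1, 3, 448, 448)  # RGB channels of an RGBD frame (depth fusion omitted)
    print(model(rgb).shape)  # torch.Size([1, 1, 448, 448])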
