Abstract

Robotic arms can be designed to lift defined payloads precisely and are used in operations such as pick and place, welding, painting, and precise motion. These robots are typically programmed through high-level programming, a teach pendant, or graphical programming, and any change in the task requires tedious reprogramming. To address this problem, this article focuses on the development of a learning model that teaches skills to a robotic arm (xArm) through demonstrations, by interfacing it with an Oak-D camera. The xArm has a reach of 700 mm and is fitted with a two-jaw gripper; the Oak-D camera offers depth sensing, on-board AI capability, and real-time processing. The experimental setup consists of the xArm, the Oak-D camera, and a camera mount. The robotic arm is driven through the xArm Python SDK, and the camera is set up through the DepthAI Python SDK. Initially, color template files for block detection are created using a color segmentation technique; the camera then matches each color template against the real-time image to obtain block coordinates with respect to the camera view. In learn mode, the blocks are detected, their coordinates are computed with respect to the camera, and a coordinate transformation yields the block positions with respect to the xArm. Each transformed coordinate is matched against position feedback from the xArm, and the block's color is logged, producing a learn file. In the demonstration phase, the block coordinates are again computed in the camera frame and transformed into the xArm frame, and the controller then commands the xArm to sort the blocks according to the learn file. The robot arm sorted the blocks in the same order as taught in learn mode based on the color templates, and it was also able to adapt to changes made in the environment.
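
The abstract does not give the implementation of the color-segmentation step, but a minimal sketch of locating a block in a camera frame could look like the following. The HSV thresholds stand in for the color template files described above and are illustrative assumptions, as is the choice of red; the actual system matches pre-built templates against the real-time image.

```python
import cv2
import numpy as np

# Hypothetical HSV range for a red block; in the described system these
# values would come from the color template files built during segmentation.
LOWER_RED = np.array([0, 120, 70])
UPPER_RED = np.array([10, 255, 255])

def find_block_centroid(bgr_frame):
    """Return the (u, v) pixel centroid of the largest red region, or None."""
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_RED, UPPER_RED)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
```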
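The camera-to-xArm coordinate transformation and the resulting motion command could be sketched as below. The 4x4 homogeneous transform is a hypothetical placeholder for the calibration between the Oak-D mount and the xArm base; the XArmAPI calls are from the xArm Python SDK named in the abstract, while the controller IP address and all coordinate values are illustrative assumptions.

```python
import numpy as np
from xarm.wrapper import XArmAPI

# Assumed extrinsic calibration: rotation + translation (mm) of the camera
# frame expressed in the xArm base frame (values are illustrative only).
T_CAM_TO_ARM = np.array([
    [1.0,  0.0,  0.0, 200.0],
    [0.0, -1.0,  0.0,   0.0],
    [0.0,  0.0, -1.0, 450.0],
    [0.0,  0.0,  0.0,   1.0],
])

def camera_to_arm(p_cam_mm):
    """Transform a 3-D point (mm) from the camera frame to the xArm base frame."""
    p = np.append(np.asarray(p_cam_mm, dtype=float), 1.0)  # homogeneous coords
    return (T_CAM_TO_ARM @ p)[:3]

arm = XArmAPI('192.168.1.100')   # assumed controller IP
arm.motion_enable(enable=True)
arm.set_mode(0)                  # position control mode
arm.set_state(state=0)           # ready state
arm.set_gripper_enable(True)

# Block position detected in the camera view (illustrative values, mm)
x, y, z = camera_to_arm([50.0, -30.0, 400.0])
arm.set_position(x=x, y=y, z=z, roll=180, pitch=0, yaw=0,
                 speed=100, wait=True)
arm.set_gripper_position(0, wait=True)  # close the two-jaw gripper
```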
