Abstract

In this study, three webcams are integrated into a five-joint robotic arm system. One camera recognizes commands, consisting of words and numbers, from a control panel. The command characters are extracted after the image is transformed from the red-green-blue (RGB) to the hue-saturation-lightness (HSL) color space and processed with optical character recognition and pattern matching; the resulting message is then sent to the robot system. The other two cameras capture left and right images of an object. A calibration procedure for the robot and cameras is performed first, after which values from the image planes are matched and transformed into three-dimensional coordinates by the Q matrix. The coordinates are then translated into 4096-step precision values for the robotic arm system. Arm movements are governed by fuzzy logic, which drives the robotic arm to the position and orientation of a given object, and feedback values from the arm movement are used to correct position errors in real time. The robot system can thus acquire the three-dimensional coordinates of an object and perform automatic smartphone-testing operations from commands issued through the visual recognition system.
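The RGB-to-HSL conversion step described above can be sketched with Python's standard `colorsys` module: converting each pixel to HLS and thresholding on lightness gives a rough binarization of the control-panel characters before OCR. The threshold value and per-pixel processing here are illustrative assumptions, not the paper's actual pipeline.

```python
import colorsys

def lightness_mask(rgb_pixels, threshold=0.5):
    """Convert RGB pixels to HLS and keep those whose lightness exceeds
    the threshold -- a rough binarization step before OCR.
    The threshold of 0.5 is an assumption; the paper does not state one."""
    mask = []
    for r, g, b in rgb_pixels:
        # colorsys expects channel values in [0, 1]
        h, l, s = colorsys.rgb_to_hls(r / 255.0, g / 255.0, b / 255.0)
        mask.append(l > threshold)
    return mask

# Bright (character) pixels pass; dark background pixels do not.
print(lightness_mask([(250, 250, 250), (10, 10, 10), (200, 40, 40)]))
```

In a real system the masked image would then be fed to the OCR and pattern-matching stage to recover the command text.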
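The Q-matrix transformation from image-plane values to three-dimensional coordinates follows the standard stereo reprojection formulation, which can be sketched as below. The focal length, principal point, and baseline used to build Q here are assumed example values; in the paper they would come from the calibration procedure.

```python
import numpy as np

def reproject_point(u, v, d, Q):
    """Map an image-plane point (u, v) with stereo disparity d to 3-D
    camera coordinates via the 4x4 reprojection matrix Q
    (standard stereo formulation: [X Y Z W]^T = Q [u v d 1]^T)."""
    X, Y, Z, W = Q @ np.array([u, v, d, 1.0])
    return np.array([X, Y, Z]) / W

# Illustrative Q built from assumed calibration values:
# focal length f (pixels), principal point (cx, cy), baseline Tx (meters).
f, cx, cy, Tx = 700.0, 320.0, 240.0, 0.06
Q = np.array([
    [1.0, 0.0, 0.0,    -cx],
    [0.0, 1.0, 0.0,    -cy],
    [0.0, 0.0, 0.0,      f],
    [0.0, 0.0, 1.0/Tx, 0.0],
])

point = reproject_point(400.0, 300.0, 35.0, Q)
print(point)  # depth Z = f * Tx / d = 1.2 m for these assumed values
```

With matched left/right image points, the disparity `d` is simply the horizontal offset between them, so this one matrix product recovers the object's 3-D position for the arm to target.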
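Translating the recovered coordinates into the arm's 4096-step precision values suggests a 12-bit command resolution per joint; a minimal sketch of such a mapping is a linear rescale of each joint coordinate onto the integer range 0–4095. The range endpoints and clamping behavior here are assumptions for illustration.

```python
def to_servo_units(value, lo, hi, resolution=4096):
    """Linearly map a joint coordinate from its physical range [lo, hi]
    onto an integer command in [0, resolution - 1]. The 4096-step
    resolution matches the abstract; the endpoints are assumed."""
    value = min(max(value, lo), hi)  # clamp to the valid range
    return round((value - lo) / (hi - lo) * (resolution - 1))

# Example: a joint angle range of -180..180 degrees (assumed).
print(to_servo_units(-180.0, -180.0, 180.0))  # lower endpoint -> 0
print(to_servo_units(180.0, -180.0, 180.0))   # upper endpoint -> 4095
```

A closed-loop controller, such as the fuzzy-logic scheme the abstract describes, would then compare commanded and fed-back values in these units to correct position error in real time.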
