Abstract

We propose 3D linear visual servoing for a humanoid robot. Linear visual servoing is based on a linear approximation between the binocular visual space and the joint space of the humanoid robot. It is very robust to calibration error, especially to camera turning, because it uses neither camera angles nor joint angles to compute the feedback command. Although the method is effective for 3D positioning control, its workspace is limited to the space in front of the robot. In this paper, we expand the workspace of linear visual servoing so that the robot can manipulate a target object in a wide space. We obtain linear approximation matrices in other regions of the workspace and express the matrix as a function of the neck angle using a neural network. Experimental results are presented to demonstrate the effectiveness of the proposed method.
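The core idea described above can be sketched in a few lines. The following is a minimal, hypothetical illustration (all names and values are illustrative, not from the paper): binocular image features are assumed to depend approximately linearly on the joint angles, and the controller feeds the image-space error back through an estimate of that linear map, without ever using camera angles or a camera model. In the paper the map is identified per workspace region and interpolated over the neck angle with a neural network; here a single fixed, deliberately imperfect estimate stands in for it.

```python
import numpy as np

# Hypothetical sketch of linear visual servoing. The binocular feature
# vector f (e.g. left/right pixel coordinates of the target, 4-D) is
# assumed to relate to the arm joint vector q (3-D) by a locally
# constant linear map: f ~= A q + b.

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 3))          # unknown "true" feature/joint map
b = rng.normal(size=4)

def observe(q):
    """Simulated binocular feature measurement for joint vector q."""
    return A @ q + b

# The controller only needs an approximation G of the pseudo-inverse
# of A; scaling it by 0.8 makes the estimate deliberately imperfect,
# mimicking identification/calibration error.
G = np.linalg.pinv(A) * 0.8

q = np.zeros(3)                              # current joint angles
f_d = observe(np.array([0.3, -0.2, 0.5]))    # desired feature vector
gain = 0.5

for _ in range(100):
    # Image-error feedback: no camera angles, no joint-angle model.
    q = q + gain * G @ (f_d - observe(q))

print(np.linalg.norm(f_d - observe(q)))      # residual image error
```

Even with the 20% error in `G`, the loop converges, which is the robustness property the abstract refers to: the feedback contracts the image-space error as long as the estimated map is roughly aligned with the true one.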
