Abstract

This study proposes a hybrid visual servoing technique optimised to tackle the shortcomings of classical 2D, 3D, and hybrid visual servoing approaches, chiefly convergence issues, image and robot singularities, and trajectories that are unreachable for the robot. To address these deficiencies, 3D estimation of the visual features is used to control the translation along the Z-axis as well as all rotations. To speed up the visual servoing (VS) operation, adaptive gains are used. A Damped Least Squares (DLS) approach reduces the effect of robot singularities and smooths out discontinuities. Finally, manipulability is established as a secondary task, and the redundancy of the robot is resolved using the classical null-space projection operator. The proposed approach is compared with the classical 2D, 3D, and hybrid visual servoing methods in both simulation and real-world experiments. It yields more efficient robot trajectories, with shorter camera paths than 2D image-based and classical hybrid VS methods. Compared with the traditional position-based approach, the proposed method is less likely to lose the object from the camera's field of view, and it is more robust to camera calibration errors. Moreover, the proposed approach offers greater robot controllability (higher manipulability) than the other approaches.
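As a rough sketch of the two ingredients named in the abstract, the DLS pseudo-inverse and the classical null-space projection for a secondary task can be written as follows. This is not the authors' implementation; the function names, gain, and damping value are illustrative assumptions:

```python
import numpy as np

def dls_pinv(J, damping=0.01):
    """Damped least-squares (DLS) inverse: J^T (J J^T + lambda^2 I)^-1.
    The damping term bounds joint velocities near robot singularities."""
    JJt = J @ J.T
    return J.T @ np.linalg.inv(JJt + (damping ** 2) * np.eye(JJt.shape[0]))

def task_priority_velocity(J, e, q_dot_secondary, gain=0.5):
    """Joint velocity command: primary visual-servoing task plus a secondary
    task (e.g. a manipulability gradient) projected into the null space of
    the primary task Jacobian, so it cannot disturb the primary task."""
    J_pinv = dls_pinv(J)
    primary = -gain * (J_pinv @ e)           # drive the feature error to zero
    N = np.eye(J.shape[1]) - J_pinv @ J      # null-space projector of J
    return primary + N @ q_dot_secondary
```

For a 7-joint arm executing a 6-DOF task, `J` is 6x7, leaving one redundant degree of freedom; the projector `N` routes the secondary velocity through that spare motion only (up to the small leakage introduced by the damping).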

Highlights

  • Vision sensors are widely used to provide contactless knowledge about the environment

  • To overcome the drawbacks of classical visual servoing (VS) methods, we propose the Decoupled Hybrid Visual Servoing (DHVS) method, which offers better controllability than other VS methods

  • Using the proposed DHVS, the robot arm performs a battery-sorting task: (a) the robot follows the visual features of the object online, with the tracked path of each feature shown on the camera screen; (b) the robot moves straight down to detect the surface via force feedback; (c,d) the object is lifted by the vacuum suction gripper and the battery is released into the corresponding basket; (e) the feature errors converge to zero during visual servoing; (f) the force value along the Z-axis used to detect the object surface


Introduction

Vision sensors are widely used to provide contactless knowledge about the environment. Classical control laws are mostly formulated by minimising a task function that corresponds to the achievement of a given goal. This primary task concerns only the pose of the robot relative to the goal; the robot's environment is not taken into account. The image-based visual servoing (IBVS) method computes the feedback directly from features extracted in image space, which makes it more robust to camera calibration and robot kinematic errors [10]. In position-based visual servoing (PBVS), the camera velocities are computed directly from task-space errors, so the interaction-matrix problems (i.e., local minima and singularities) are avoided and feasible trajectories for the robot can be generated [14].
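The classical IBVS law referred to above commands the camera twist as v_c = -λ L⁺ e, where L is the interaction matrix stacked over the features and e = s - s* is the image-space feature error. A minimal sketch for normalised point features follows; it uses the standard point interaction matrix, and the function names and gain are illustrative assumptions rather than the paper's code:

```python
import numpy as np

def point_interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of one normalised image point
    (x, y) at depth Z, mapping the camera twist (v, w) to the feature
    velocity (x_dot, y_dot)."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x ** 2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y ** 2, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Classical IBVS law v_c = -gain * L^+ * e, with e = s - s*."""
    L = np.vstack([point_interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    e = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * (np.linalg.pinv(L) @ e)
```

Note the depth Z of each point appears in L; since Z is not directly measured in pure 2D servoing, it is usually approximated or estimated, which is one motivation for the hybrid scheme proposed in this study.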

Related Works
Contributions of This Study
Methodology
Decoupled Hybrid Visual Servoing
Robot Kinematics with Task Priority
Simulation and Experimental Setup
Design of Setup 1
Case Study 1
Case Study 2
Case Study 3
Design of Setup 2
Method
Sorting Dismantled EV Battery Components by DHVS Using Setup 2
Conclusions