Visual servoing is a mature technology applied in many automated manufacturing tasks, especially in tool pose alignment. To obtain a full global view of the tools, most applications adopt an eye-to-hand configuration, or a cooperative eye-to-hand/eye-in-hand configuration, in the automated manufacturing environment. Most research has focused on developing control and observation architectures for various scenarios, but few studies have discussed the importance of the camera's location in the eye-to-hand configuration. In a manufacturing environment, the quality of the camera's estimates may vary significantly from one observation location to another, because the combined effects of environmental conditions produce different noise levels in images captured at different locations. In this paper, we propose a moving policy for the camera that explores the camera workspace and searches for the optimal location, i.e., the one where the image noise level is minimized. The algorithm also guarantees that, given the limited energy available for moving the camera, the camera ends at a suboptimal location among those already searched if the optimal one is unreachable. Unlike a simple brute-force approach, the algorithm explores the space more efficiently by adapting its search policy as it learns the environment. Combined with an image-averaging technique, the algorithm achieves the desired observation accuracy in eye-to-hand configurations with a single camera, without filtering out high-frequency information in the original image. We simulated an automated manufacturing application, and the results demonstrate that the algorithm improves observation precision under a limited energy budget.
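
The abstract does not specify the search policy or the averaging procedure; the sketch below is only a minimal illustration of the two ingredients it names. It assumes a greedy nearest-first visiting order (a placeholder for the paper's learned policy) and hypothetical helpers `move_cost`, `move_to`, and `capture`. Per-location noise is scored by the temporal standard deviation over repeated shots, and the mean frame serves as the denoised observation, which preserves spatial high-frequency detail that a low-pass filter would remove.

```python
import numpy as np

def estimate_noise(capture, n_frames=8):
    """Score the current viewpoint by averaging repeated shots.

    The temporal std across frames is a proxy for the image noise
    level; the mean frame is the denoised observation and keeps
    high-frequency scene detail (no spatial filtering involved)."""
    frames = np.stack([capture() for _ in range(n_frames)])
    return frames.std(axis=0).mean(), frames.mean(axis=0)

def search_camera_location(start, candidates, move_cost, move_to,
                           capture, budget):
    """Energy-bounded exploration over candidate camera locations.

    Visits candidates while the energy budget allows, scores each by
    its measured noise level, and finally parks the camera at the best
    (possibly suboptimal) location among those already visited."""
    current, best_loc = start, start
    best_noise, _ = estimate_noise(capture)
    remaining = set(candidates)
    while remaining:
        # Greedy placeholder policy: cheapest unvisited candidate next.
        # The paper instead adapts this policy by learning the environment.
        loc = min(remaining, key=lambda l: move_cost(current, l))
        remaining.discard(loc)
        cost = move_cost(current, loc)
        # Reserve enough energy to return to the best location found so far.
        if cost + move_cost(loc, best_loc) > budget:
            continue
        budget -= cost
        move_to(loc)
        current = loc
        noise, _ = estimate_noise(capture)
        if noise < best_noise:
            best_noise, best_loc = noise, loc
    # End at the best location already searched (reserve makes this affordable).
    if move_cost(current, best_loc) <= budget:
        move_to(best_loc)
    return best_loc, best_noise
```

Under the standard assumption of zero-mean pixel noise, averaging `n_frames` shots reduces the noise standard deviation by a factor of the square root of `n_frames`, which is why a single camera can reach the desired observation accuracy at the chosen location without sacrificing spatial detail.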