Abstract
Industrial and space applications present environments in which it is possible, and in fact desirable, to solve robotic problems using a model-based approach. From a sensory standpoint, the reasons for employing knowledge about the objects to be manipulated are twofold. First, such knowledge permits high-level, expectation-driven reasoning as opposed to low-level, data-driven searches for primitive features. This is advantageous because purely data-driven feature extraction is typically undirected and its search space is unconstrained. Second, expectation-driven reasoning can exploit knowledge derived from features that have already been found, thus expediting subsequent searches. Conversely, however, there is a rigid requirement to specify the geometry and kinematics of the object models about which reasoning is to occur. This paper describes a model-based computer vision system that has been coupled with a robot arm for the purpose of accurately reasoning about entities on a reconfigurable task panel. The final goal of the integrated system is to manipulate substructures such as hinged doors and laterally translatable drawers using computer vision as the primary sensory input. This overall objective is accomplished by first locating the camera at a position from which it can view the entire panel, so that an initial worksite registration can be computed. Next, an approximation of each substructure's spatial configuration is determined by applying kinematic and geometric knowledge in a generate-and-test paradigm. This step is followed by repositioning the robotically mounted camera to a location and orientation that is preferable for further, more accurate spatial inferences. The camera is automatically recalibrated at the new location, and a final move is made to grasp and open or close the specified substructure.
The primary advantage of the approach is that final moves can be achieved within a few millimeters of ideal target locations, even when target objects are initially viewed from locations that produce poor pose estimates, since object pose estimates are successively refined using information obtained at new viewpoints. In addition to describing the mechanisms and algorithms utilized in the research, a comparison of the accuracy of results obtained from non-repositionable and repositionable sensor-based spatial reasoning systems is presented.
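The generate-and-test step described above can be illustrated with a minimal sketch. Everything here is hypothetical: the paper does not specify its feature representation, so this example models a hinged door in 2-D as a segment rotating about a hinge, generates candidate hinge angles from the kinematic model, predicts the door's free-edge position for each, and tests each prediction against an observed edge point. A coarse pass followed by a finer pass around the winner loosely mirrors the paper's successive refinement from new viewpoints.

```python
import math

# Hypothetical kinematic model of a hinged door (names and values assumed,
# not taken from the paper).
HINGE = (0.0, 0.0)   # hinge location in panel coordinates
DOOR_LEN = 0.40      # door width in metres

def predict_edge(angle_rad):
    """Predict the door's free-edge position for a candidate hinge angle."""
    return (HINGE[0] + DOOR_LEN * math.cos(angle_rad),
            HINGE[1] + DOOR_LEN * math.sin(angle_rad))

def generate_and_test(observed_edge, lo, hi, steps=180):
    """Return the candidate angle whose prediction best fits the observation."""
    best_angle, best_err = None, float("inf")
    for i in range(steps + 1):
        angle = lo + (hi - lo) * i / steps           # generate a candidate
        px, py = predict_edge(angle)                 # predict from the model
        err = math.hypot(px - observed_edge[0],      # test against the data
                         py - observed_edge[1])
        if err < best_err:
            best_angle, best_err = angle, err
    return best_angle, best_err

if __name__ == "__main__":
    # Synthetic observation: the door actually sits at 35 degrees.
    obs = predict_edge(math.radians(35.0))
    # Coarse pass over the door's full travel, then a refined pass
    # around the coarse winner.
    coarse, _ = generate_and_test(obs, 0.0, math.pi / 2, steps=18)
    fine, err = generate_and_test(obs, coarse - 0.1, coarse + 0.1, steps=200)
    print(round(math.degrees(fine), 1), err)
```

In the real system the "test" score would compare projected model features against image features rather than a single synthetic point, but the control structure — enumerate configurations permitted by the kinematics, score each against sensory data, keep the best, then refine — is the same.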