Abstract

Robotic systems are becoming increasingly present in our daily private and working environments. The more these systems enter our spheres of action, the more they are intended to support humans in dangerous, dull, and dirty tasks, and the more flexibility is expected from them. To prepare robotic systems for such duties, they need to be able to perceive their environment. Perception of the environment can be achieved either through internal sensors, e.g., joint angle and torque sensors, or through external sensors, e.g., vision or tactile sensors. While relying on internal sensors may seem advantageous because no additional hardware has to be added, the information they provide about the environment is rather undifferentiated. This information is therefore augmented in most cases by external sensors. Among exteroceptive modalities, vision is the most widely used for analyzing the environment. However, due to inherent limitations such as dependence on lighting conditions and the need for an unobstructed line of sight, the information accessible through vision alone [1] is not sufficient to accomplish differentiated manipulation tasks [2].
