Abstract

Research reported in the literature on wearable visual computing has used exclusively static (non-active) cameras, making the imagery and image measurements dependent on the wearer's posture and motion. The camera is assumed to point in a direction suitable for viewing the relevant parts of the scene, at best by virtue of being mounted on the wearer's head, and at worst wholly by chance. Even when the camera points in roughly the correct direction, any visual processing that relies on feature correspondence from a passive camera is made more difficult by the large, uncontrolled inter-image movements which occur when the wearer moves, or even breathes. This paper presents a wearable active visual sensor that achieves a degree of decoupling of camera movement from the wearer's posture and motion through a combination of inertial and visual sensor feedback and active control. The issues of sensor placement, robot kinematics, and their relation to wearability are discussed, and the performance of the prototype robot is evaluated on several essential visual tasks. Potential applications for this kind of wearable robot are also discussed.
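To make the control idea in the abstract concrete, the sketch below shows one plausible form of the described loop: a gyroscope supplies a fast feedforward rate command that counter-rotates the camera against the wearer's head motion, while a slower visual feedback term re-centres a tracked image feature and absorbs gyro drift. This is a minimal illustration, not the authors' implementation; the class name GazeStabilizer, the proportional structure, the gain k_visual, the sign conventions, and the 600-pixel focal length are all assumptions.

import numpy as np

class GazeStabilizer:
    """Minimal sketch of an inertial-plus-visual stabilization loop
    (hypothetical; not the paper's actual controller)."""

    def __init__(self, k_visual=2.0):
        # Proportional gain (1/s) mapping angular fixation error to a
        # corrective rate command; the value is illustrative only.
        self.k_visual = k_visual

    def command(self, head_rate, feature_offset_px, focal_px):
        """Return pan/tilt rate commands in rad/s.

        head_rate         -- gyro-measured head angular rate about the
                             pan and tilt axes (rad/s)
        feature_offset_px -- (x, y) offset of the tracked feature from
                             the image centre, in pixels
        focal_px          -- camera focal length in pixels
        """
        head_rate = np.asarray(head_rate, dtype=float)
        offset = np.asarray(feature_offset_px, dtype=float)

        # Feedforward: counter-rotate the camera against the measured
        # head motion so the gaze direction stays fixed in the world.
        feedforward = -head_rate

        # Feedback: convert the pixel offset to an angular error
        # (small-angle approximation), then issue a proportional rate
        # command that drives the feature back to the image centre.
        # Signs assume positive pan/tilt rotates the optical axis
        # toward positive image x/y.
        angular_error = offset / focal_px
        feedback = self.k_visual * angular_error

        return feedforward + feedback

# Example: the wearer's head pans at 0.3 rad/s while the tracked
# feature sits 12 px right and 4 px below the image centre.
stab = GazeStabilizer()
pan_tilt_rate = stab.command((0.3, 0.0), (12.0, 4.0), focal_px=600.0)

The split mirrors the complementary strengths the abstract implies: the gyro responds quickly but drifts over time, whereas vision is slower but anchored to the scene, so the visual term corrects the residual error the inertial term leaves behind.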
