Abstract

Sophisticated mobile robots are increasingly required to perform semi-autonomous or even fully autonomous operations, such as decision making, simultaneous localization and mapping, motion tracking and risk assessment, while operating in dynamic environments. Most of these capabilities depend heavily on the quality of the input from the cameras mounted on the mobile platform and require fast processing times and responses. However, quality in robot vision systems is determined not only by quantitative features such as camera resolution, frame rate or sensor gain, but also by qualitative features such as sequences free of unwanted movement, fast and effective image pre-processing algorithms and real-time response. A robot with optimal quantitative features in its vision system cannot achieve the finest performance when the qualitative features are not met. Image stabilization is one of the most important qualitative features for a mobile robot vision system, since it removes unwanted motion from the frame sequences captured by the cameras. This image sequence enhancement is necessary in order to improve the performance of the complex image processing algorithms executed subsequently. Many image processing applications require stabilized sequences as input, while others exhibit substantially better performance when processing stabilized sequences. Intelligent transportation systems equipped with vision systems use digital image stabilization to substantially reduce algorithmic computational burden and complexity (Tyan et al. (2004)), (Jin et al. (2000)). Video communication systems with sophisticated compression codecs integrate image stabilization for improved computational and performance efficiency (Amanatiadis & Andreadis (2008)), (Chen et al. (2007)). Furthermore, unwanted motion is removed from medical images via stabilization schemes (Zoroofi et al. (1995)).
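The core operation behind digital image stabilization described above, estimating the unwanted global motion between consecutive frames and compensating for it, can be illustrated with a minimal sketch. The snippet below is not the method of any cited work; it assumes a pure-translation motion model and uses phase correlation over the Fourier cross-power spectrum, one common choice for global motion estimation:

```python
import numpy as np

def phase_correlation_shift(ref, cur):
    """Estimate the integer (dy, dx) translation of `cur` relative to `ref`
    via phase correlation (normalized cross-power spectrum)."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(cur)
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12          # keep phase, discard magnitude
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the frame to negative values
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx

def stabilize(frames):
    """Shift every frame back toward the first one (translation-only model)."""
    ref = frames[0]
    out = [ref]
    for f in frames[1:]:
        dy, dx = phase_correlation_shift(ref, f)
        out.append(np.roll(f, (dy, dx), axis=(0, 1)))
    return out
```

A full stabilizer would additionally separate intentional camera motion (e.g. the robot turning) from unwanted jitter, typically by low-pass filtering the estimated motion trajectory, and would handle rotation and scaling; this sketch covers only the translational estimation-and-compensation step.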
Motion tracking and video surveillance applications achieve better qualitative results when cooperating with dedicated stabilization systems (Censi et al. (1999)), (Marcenaro et al. (2001)), as shown in Fig. 1. Several robot stabilization system implementations that use visual and inertial information have been reported. An image stabilization system that compensates for the walking oscillations of a biped robot is described in (Kurazume & Hirose (2000)). A vision and inertial cooperation scheme for stabilization has also been presented in (Lobo & Dias (2003)), using a fusion model that combines the vertical reference provided by the inertial sensor with vanishing points extracted from images. A visuo-inertial stabilization for space-variant binocular systems has also been developed in (Panerai et al. (2000)), where an inertial device measures angular velocities and linear accelerations, while image geometry facilitates the computation of first-order motion parameters.
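The visuo-inertial fusion idea in the systems above, a high-rate but drifting inertial measurement corrected by a slower, drift-free visual reference, is often realized with a complementary filter. The following is a hedged sketch of that general principle for a single rotation axis, not the specific fusion model of any cited work; the gain `alpha` is an assumed tuning parameter:

```python
def complementary_filter(theta_prev, omega_gyro, theta_vision, dt, alpha=0.98):
    """Fuse a gyroscope rate with a visual orientation estimate (one axis).

    theta_prev   -- previous fused orientation estimate (rad)
    omega_gyro   -- angular velocity from the inertial sensor (rad/s)
    theta_vision -- absolute orientation from the vision system (rad)
    alpha        -- blend gain: near 1 trusts the gyro short-term, while the
                    small (1 - alpha) visual term cancels long-term gyro drift
    """
    gyro_prediction = theta_prev + omega_gyro * dt   # integrate the gyro
    return alpha * gyro_prediction + (1.0 - alpha) * theta_vision
```

Run at the inertial sensor rate, this keeps the fast gyro response between (slower) visual updates while the visual term slowly pulls the estimate back to the drift-free reference.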
