Abstract

We present a simplified algorithm for localizing an object using multiple visual images obtained from widely used digital imaging devices. We use a parallel projection model that supports both zooming and panning of the imaging devices. The proposed algorithm is based on a virtual viewable plane that relates an object's position to a reference coordinate. The reference point is a rough estimate, which may be obtained from a pre-estimation process. The algorithm minimizes localization error through an iterative process with relatively low computational complexity. In addition, the nonlinear distortion of the imaging devices is compensated during the iterative process. Finally, performance is evaluated and analyzed for several scenarios in both indoor and outdoor environments.
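The iterative refinement described above can be sketched roughly as follows. This is a minimal illustration only, not the paper's exact formulation: the camera parameterization (position `c`, unit image-axis vector `a`, zoom factor `s`) and the radial distortion model with coefficient `k` are hypothetical simplifications chosen to show how a rough reference estimate can be refined by alternating distortion compensation with a linear least-squares solve.

```python
import numpy as np

def localize(cams, obs, p0, k=0.0, iters=10):
    """Iteratively refine a 2-D object position from per-camera image
    offsets under a (hypothetical) parallel projection model.

    cams : list of (c, a, s) -- camera position c, unit image-axis
           vector a, and zoom factor s for each camera.
    obs  : measured image offset u for each camera.
    p0   : rough initial position estimate (the "reference point").
    k    : radial distortion coefficient (0 means an ideal lens).
    """
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        A, b = [], []
        for (c, a, s), u in zip(cams, obs):
            # Predicted (undistorted) offset from the current estimate;
            # used to invert the distortion model u = t * (1 + k * t^2).
            u_pred = s * np.dot(a, p - np.asarray(c))
            u_corr = u / (1.0 + k * u_pred * u_pred)
            # Under parallel projection, u_corr = s * a . (p - c),
            # which is linear in the unknown position p.
            A.append(s * np.asarray(a, dtype=float))
            b.append(u_corr + s * np.dot(a, c))
        # Linear least squares over all cameras' corrected offsets.
        p, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return p
```

With an ideal lens (`k = 0`) the system is linear and one pass suffices; the iteration only does work when distortion must be compensated around the current estimate, which mirrors the low-complexity iterative structure the abstract describes.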

Highlights

  • Object localization is one of the key operations in many tracking applications such as surveillance, monitoring, and tracking [1,2,3,4,5,6,7,8]. In these tracking systems, the accuracy of object localization is critical and poses a considerable challenge

  • We propose a simplified algorithm for localizing multiple objects in a multiple-camera environment, where images are obtained from traditional digital imaging devices

  • This paper proposes an accurate and effective object localization algorithm with visual images from unreliable estimate coordinates


Summary

INTRODUCTION

Object localization is one of the key operations in many tracking applications such as surveillance, monitoring, and tracking [1,2,3,4,5,6,7,8]. In these tracking systems, the accuracy of object localization is critical and poses a considerable challenge. To alleviate the dependence on calibration patterns, some methods based on self-calibration use point matching across image sequences [29,30,31,32,33,34]. An effective localization algorithm for tracking applications must be computationally simple, using a model that does not require 3D reconstruction, and must adapt robustly to camera movement during tracking (i.e., zooming and panning) without requiring additional imaging-device calibration from the images. We propose a simplified algorithm for localizing multiple objects in a multiple-camera environment, where images are obtained from traditional digital imaging devices.

Basic concept of a parallel projection model
Zooming and panning
The relationship between camera positions and pan factors
The concept of visual localization
Object localization based on a single camera
Object localization based on multiple cameras
Effect of zooming and lens distortion
Effect of lens shape
Iterative localization for error minimization
Effect of tilting angle
Simulation setup: basic illustration
Application of the algorithms
CONCLUSION