Abstract

An object detection algorithm using multiple cameras is proposed. The information fusion is based on homography mapping of the foreground information from multiple cameras onto multiple parallel planes. Unlike most recent algorithms, which transmit and project full foreground bitmaps, the proposed method approximates each foreground silhouette with a polygon and projects only the polygon vertices. In addition, an alternative approach to estimating the homographies for multiple parallel planes is presented; it is based on the observed pedestrians and does not resort to vanishing-point estimation. The ability of the algorithm to remove cast shadows in moving object detection is also investigated, and results on open video datasets are presented.
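The core idea of projecting polygon vertices rather than full foreground bitmaps can be illustrated with a minimal sketch, assuming OpenCV is available. Here `fg_mask` is a binary foreground mask from one camera and `H` is a known 3x3 homography to a reference plane; the function name and the approximation tolerance `eps_frac` are illustrative assumptions, not taken from the paper.

```python
import cv2
import numpy as np

def project_silhouette_vertices(fg_mask: np.ndarray, H: np.ndarray, eps_frac: float = 0.01):
    """Approximate each foreground silhouette by a polygon and project
    only its vertices through the homography H (illustrative sketch)."""
    contours, _ = cv2.findContours(fg_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    projected_polygons = []
    for contour in contours:
        # Polygon approximation: far fewer points than the full silhouette bitmap.
        eps = eps_frac * cv2.arcLength(contour, closed=True)
        poly = cv2.approxPolyDP(contour, eps, closed=True).astype(np.float32)
        # Map only the polygon vertices to the reference plane.
        mapped = cv2.perspectiveTransform(poly.reshape(-1, 1, 2), H)
        projected_polygons.append(mapped.reshape(-1, 2))
    return projected_polygons
```

Projecting only the vertices keeps the per-frame data that must be transmitted and warped small, while the projected polygons can still be rasterized and intersected across cameras on each parallel plane.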
