Abstract

Security systems increasingly rely on Automated Video Surveillance (AVS) technology. In particular, digital video lends itself to internet and local-network communication, remote monitoring, and computer processing. AVS systems can perform many of the tedious and repetitive tasks currently carried out by trained security personnel, and the technology has already made significant progress towards automating basic security functions such as motion detection, object tracking, and event-based video recording. However, many problems associated with even these automated functions still need to be addressed, for example the high "false alarm rate" and the "loss of track" under total or partial occlusion when systems operate across a wide range of operational parameters (day, night, sunshine, cloud, fog, range, viewing angle, clutter, etc.). Current surveillance systems work well only under a narrow range of operational parameters and therefore need to be hardened against a wider range of conditions. In this paper, we present a multi-spectral fusion approach that performs accurate pedestrian segmentation under varying operational parameters. Our fusion method combines the "best" detection results from the visible images with the "best" from the thermal images. Motion detection results in visible images are easily corrupted by noise and shadows, while objects in the thermal image are relatively stable but may be missing parts that thermally blend with the background. Our method makes use of the "best" object components from each modality and de-emphasizes the rest.
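
As a rough illustration of the fusion idea described above (a minimal sketch, not the authors' actual algorithm), the snippet below combines per-pixel foreground masks from the visible and thermal channels by keeping, at each pixel, the channel with the higher detection confidence and suppressing pixels where neither channel is confident. The function name, the confidence maps, and the threshold are all hypothetical illustrations.

```python
import numpy as np

def fuse_foreground_masks(visible_mask: np.ndarray, thermal_mask: np.ndarray,
                          visible_conf: np.ndarray, thermal_conf: np.ndarray,
                          conf_threshold: float = 0.5) -> np.ndarray:
    """Fuse HxW boolean foreground masks from two modalities.

    visible_conf / thermal_conf are per-pixel confidences in [0, 1]
    (hypothetical outputs of each channel's motion detector).
    """
    # At each pixel, take the mask from whichever channel is more confident.
    use_visible = visible_conf >= thermal_conf
    fused = np.where(use_visible, visible_mask, thermal_mask)
    # Drop pixels where neither channel is confident (likely noise or shadow).
    fused &= np.maximum(visible_conf, thermal_conf) >= conf_threshold
    return fused
```

In practice the confidence maps could come from, e.g., shadow likelihood in the visible channel and thermal contrast in the infrared channel; the sketch only shows how the "best" components of each modality might be selected pixel by pixel.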
