Abstract

This paper introduces the architecture and implementation details of an automatic real-time video surveillance system capable of autonomously detecting anomalous events. Current video surveillance systems are not flexible in adapting to different operative scenarios (they only work well in known and structured environments) and generally need human assistance to recognize and tag specific visual events. The proposed system adapts automatically to different scenarios without human intervention (apart from the placement of the TV sensors) and autonomously learns the “typical” behavior of targets in each specific operative environment by means of robust self-learning techniques. The learned knowledge is used by the system to issue alerts automatically by analyzing the trajectories of visual targets in the controlled scene. Robustness is obtained through an improved version of the Altruistic Vector Quantization (AVQ) algorithm. The modified AVQ autonomously establishes the number of trajectory prototypes and improves the representativeness of the prototypes themselves. Anomalies are detected when visual trajectories deviate from the “typical” learned prototypes. Standard PCs and TV cameras have been used for the actual implementation, which has been tested in many real indoor and outdoor environments. Real-time performance has been obtained (15 fps per camera). A preliminary learning period (about 20 minutes, to provide a suitable interval for learning all the “typical” visual trajectories) is necessary; after that period the system gives automatic alerts about events that do not conform to typical behaviors. The field of view can be changed (by panning the TV camera, for example), after which the system relearns the new scenario without any human intervention (no thresholds or other settings) and accurately detects anomalous events.
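
To make the learning-and-alerting loop concrete, the sketch below shows a simplified trajectory vector-quantization scheme in Python. It is not the paper's modified AVQ: the fixed resampling length, the prototype-spawning distance, the learning rate, and the percentile-based alert bound are all illustrative assumptions, whereas the actual system establishes the number of prototypes autonomously and needs no user-set thresholds.

# Minimal illustrative sketch (not the paper's modified AVQ): online vector
# quantization of fixed-length trajectory descriptors, with an anomaly flag
# raised when a new trajectory is far from every learned prototype.
# The resampling length, learning rate, spawn distance, and percentile bound
# are all illustrative assumptions.

import numpy as np

def resample(traj, n_points=16):
    """Resample a trajectory (sequence of (x, y) points) to a fixed-length vector."""
    traj = np.asarray(traj, dtype=float)
    # Cumulative arc length, used as the interpolation parameter.
    seg = np.linalg.norm(np.diff(traj, axis=0), axis=1)
    s = np.concatenate(([0.0], np.cumsum(seg)))
    if s[-1] == 0:
        return np.tile(traj[0], n_points)
    t = np.linspace(0.0, s[-1], n_points)
    x = np.interp(t, s, traj[:, 0])
    y = np.interp(t, s, traj[:, 1])
    return np.stack([x, y], axis=1).ravel()   # shape: (2 * n_points,)

class TrajectoryVQ:
    """Spawns a new prototype when no existing one is close enough,
    otherwise nudges the winning prototype toward the sample (standard VQ update)."""

    def __init__(self, spawn_dist=50.0, lr=0.05):
        self.protos = []          # learned prototype vectors
        self.spawn_dist = spawn_dist
        self.lr = lr
        self.train_dists = []     # winner distances observed during learning

    def _nearest(self, v):
        d = [np.linalg.norm(v - p) for p in self.protos]
        i = int(np.argmin(d))
        return i, d[i]

    def learn(self, traj):
        v = resample(traj)
        if not self.protos:
            self.protos.append(v.copy())
            self.train_dists.append(0.0)
            return
        i, d = self._nearest(v)
        self.train_dists.append(d)
        if d > self.spawn_dist:
            self.protos.append(v.copy())      # new "typical" behavior class
        else:
            self.protos[i] += self.lr * (v - self.protos[i])

    def is_anomalous(self, traj, percentile=99):
        """Flag a trajectory whose distance to every prototype exceeds a
        bound derived from the distances seen during learning."""
        bound = np.percentile(self.train_dists, percentile)
        _, d = self._nearest(resample(traj))
        return d > max(bound, self.spawn_dist)

In this simplified reading of the abstract, each target trajectory completed during the roughly 20-minute learning phase would be passed to learn(); afterwards, is_anomalous() flags trajectories that lie far from every learned prototype, and relearning a new field of view amounts to starting a fresh learning phase.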
