Abstract

In this work, we present a multi-camera surveillance system based on self-organizing neural networks that represent events in video. The system processes several tasks in parallel on GPUs (graphics processing units). It addresses vision tasks at several levels, such as segmentation, representation or characterization, and motion analysis and tracking. These capabilities allow the system to build a robust representation of the environment and to interpret the behavior of mobile agents in the scene. The vision module must also be integrated into a global system that operates in a complex environment, receiving images from multiple acquisition devices at video rate. Because it supplies relevant information to higher-level systems that monitor the scene and make decisions in real time, it must satisfy a set of requirements: time constraints, high availability, robustness, high processing speed, and reconfigurability. We have built a system able to represent and analyze motion in video acquired by a multi-camera network and to process the multi-source data in parallel on a multi-GPU architecture.

Highlights

  • The development of the visual surveillance process in dynamic scenes often includes steps for modeling the environment, motion detection, classification of moving objects, tracking and the recognition of actions

  • The majority of visual surveillance systems for scene analysis depend on the use of knowledge about the scenes in which objects move in a predefined manner [8,9]

  • Many of the operations performed in the growing neural gas (GNG) algorithm are parallelizable, because they act on all the neurons of the network simultaneously
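
To make this per-neuron parallelism concrete, the following is a minimal CUDA sketch (not the authors' implementation; the kernel name, the flat row-major `weights` layout and the dimensionality `dim` are assumptions) of the kind of kernel a GPU version of GNG relies on: each thread computes the squared Euclidean distance between the current input sample and one neuron's reference vector, so all neurons are evaluated simultaneously.

```cuda
// Hypothetical per-neuron distance kernel for GNG on the GPU.
// weights: numNeurons x dim row-major array of reference vectors.
// sample:  the current input vector (e.g., a 2D pixel position).
__global__ void gngDistancesKernel(const float *weights,
                                   const float *sample,
                                   float *distances,
                                   int numNeurons,
                                   int dim)
{
    int n = blockIdx.x * blockDim.x + threadIdx.x;
    if (n >= numNeurons) return;

    float acc = 0.0f;
    for (int d = 0; d < dim; ++d) {
        float diff = weights[n * dim + d] - sample[d];
        acc += diff * diff;
    }
    // The squared distance is enough to rank neurons; the winner and
    // second winner can then be found without leaving the device.
    distances[n] = acc;
}
```

The two best-matching neurons would then be selected by reducing `distances` on the device rather than copying it back to the host, which is where the parallel reduction step listed in the outline below comes in.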


Summary

Introduction

The development of the visual surveillance process in dynamic scenes often includes steps for modeling the environment, motion detection, classification of moving objects, tracking, and the recognition of actions. Work on the analysis of behaviors has been successful thanks to the use of effective and robust techniques for detecting and tracking objects and people. This has allowed interest to focus on higher levels of scene understanding. The intrinsically parallel behavior of artificial neural networks, which we used to represent video events in our previous work [10], permits their implementation on GPUs (graphics processing units), allowing images from multiple cameras to be processed simultaneously. A multi-core CPU architecture and multiple GPU devices have been combined to manage several streams in parallel and to accelerate each stream on a different GPU. Current GPUs have a large number of processors that can be used for general-purpose computing.
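
As an illustration of this combination (a hedged sketch, not the paper's actual code: the camera count, the thread layout and the `processFrameOnGpu` routine are placeholders), one CPU thread can be dedicated to each camera stream and bound to its own GPU with `cudaSetDevice`, so that the streams are handled concurrently:

```cuda
// Hypothetical multi-camera / multi-GPU dispatch: one host thread per
// camera stream, each bound to its own GPU device.
#include <cuda_runtime.h>
#include <cstdio>
#include <thread>
#include <vector>

// Stand-in for the per-frame GPU pipeline (segmentation, GNG update,
// tracking); in the real system this would launch CUDA kernels.
static void processFrameOnGpu(int camera)
{
    std::printf("processing a frame from camera %d\n", camera);
}

static void cameraWorker(int camera, int gpuId)
{
    cudaSetDevice(gpuId);          // CUDA calls from this thread target gpuId
    processFrameOnGpu(camera);     // would loop over frames at video rate
}

int main()
{
    int numGpus = 0;
    cudaGetDeviceCount(&numGpus);
    if (numGpus == 0) numGpus = 1; // fall back gracefully on GPU-less hosts

    const int numCameras = 4;      // assumed number of video sources
    std::vector<std::thread> workers;
    for (int cam = 0; cam < numCameras; ++cam)
        workers.emplace_back(cameraWorker, cam, cam % numGpus);

    for (auto &t : workers)
        t.join();
    return 0;
}
```

Distributing the streams round-robin over the available devices is only one possible policy; the point is that each stream's kernels and transfers run on their own GPU while the CPU cores keep acquisition and dispatch off the critical path.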

Multicore GPGPU Architecture
Growing Neural Gas and GNG-Seq
GNG Algorithm
GNG-Seq
GPU Implementation of GNG
Euclidean Distance Calculation
Parallel Reduction
Other Optimizations
Rate of Adjustments per Second
Multisource Information Processing with GPU
Tracking Multiple Agents in Visual Surveillance Systems
Privacy and Security
Multi-Core CPU and Multi-GPU Implementation
Experiments
Conclusions
