Abstract

We present a distributed virtual vision simulator capable of simulating large-scale camera networks. The simulator models pedestrian traffic in different 3D environments, and simulated cameras deployed in these virtual environments generate synthetic video feeds that are fed into a vision processing pipeline supporting pedestrian detection and tracking. The visual analysis results are then used for subsequent processing, such as camera control, coordination, and handoff. Our virtual vision simulator is realized as a collection of modules that communicate with each other over the network. Consequently, we can deploy our simulator over a network of computers, allowing us to simulate much larger camera networks and much more complex scenes than would otherwise be possible. Specifically, we show that our proposed virtual vision simulator can model a camera network, comprising more than one hundred active pan/tilt/zoom and passive wide field-of-view cameras, deployed on an upper floor of an office tower in downtown Toronto.
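The sketch below is a minimal illustration (not the authors' implementation) of the module-based, networked design the abstract describes: simulator modules exchange per-frame analysis results over sockets, so individual modules can run on different machines. The module names, message fields, and port number are illustrative assumptions.

```python
import json
import socket
import threading

HOST, PORT = "127.0.0.1", 5555  # assumed local test endpoint
server_ready = threading.Event()


def camera_module(camera_id: str, num_frames: int = 3) -> None:
    """Stand-in for a simulated camera module: sends per-frame detection
    summaries to the vision-processing module over TCP."""
    server_ready.wait()
    with socket.create_connection((HOST, PORT)) as conn:
        for frame in range(num_frames):
            message = {
                "camera": camera_id,
                "frame": frame,
                # In the real simulator these would come from rendered video;
                # here they are placeholder pedestrian bounding boxes.
                "detections": [{"id": 7, "bbox": [120, 80, 40, 90]}],
            }
            conn.sendall((json.dumps(message) + "\n").encode())


def vision_module() -> None:
    """Stand-in for a vision-processing module: receives detections that
    would feed tracking, camera control, and handoff logic."""
    with socket.create_server((HOST, PORT)) as server:
        server_ready.set()
        conn, _ = server.accept()
        with conn, conn.makefile() as stream:
            for line in stream:
                msg = json.loads(line)
                print(f"camera {msg['camera']} frame {msg['frame']}: "
                      f"{len(msg['detections'])} pedestrian(s)")


if __name__ == "__main__":
    worker = threading.Thread(target=vision_module, daemon=True)
    worker.start()
    camera_module("ptz-01")
    worker.join(timeout=2.0)
```

In a distributed deployment of this kind, each module would be launched on its own host and pointed at the address of its downstream consumer; the single-process threading here is only to make the example self-contained.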
