Abstract

Multi-camera networks are becoming increasingly pervasive in many monitoring and surveillance applications, and have attracted much attention in distributed systems with collaborative, real-time decision-making capabilities. While in-network data compression brings significant energy savings in camera nodes, signal representation using sparse approximations and overcomplete dictionaries has been shown to outperform traditional compression methods. In this work, an end-to-end, real-time solution is designed and implemented to enable energy-efficient and robust dictionary learning in distributed camera networks by leveraging the spatial correlation of the collected multimedia data. Traditional distributed dictionary learning relies on consensus-building algorithms, which require communicating with neighboring nodes until convergence is achieved; existing methods, however, do not exploit spatial correlations in camera networks for improved energy efficiency. In contrast, this work employs low-computational-complexity metrics to quantify and exploit the spatial correlation across camera nodes in a wireless network, enabling efficient distributed dictionary learning and in-network image compression. The performance of the proposed approach is validated through extensive simulations on public datasets as well as real-world experiments on a testbed composed of Raspberry Pi nodes.
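To make the core idea concrete, the following is a minimal sketch (not the authors' method) of sparse approximation over an overcomplete dictionary via greedy orthogonal matching pursuit: a signal is represented with only a few dictionary atoms, which is what makes dictionary-based compression attractive on resource-constrained camera nodes. The dictionary here is random Gaussian purely for illustration; a learned dictionary would replace it in practice.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: approximate y with k atoms (columns) of D."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    coef = np.zeros(0)
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        support.append(j)
        # Least-squares fit of y on the selected atoms, then update residual.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

# Illustrative setup: an overcomplete dictionary (128 atoms in a 64-dim space)
# and a synthetic 2-sparse signal. All names and sizes here are assumptions.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms
x_true = np.zeros(128)
x_true[[3, 10]] = [3.0, -2.5]
y = D @ x_true

x_hat = omp(D, y, k=2)                  # sparse code: at most 2 nonzero entries
err = np.linalg.norm(y - D @ x_hat)
```

Only the sparse code `x_hat` (a handful of index/value pairs) would need to be transmitted, rather than the full signal `y`, which is the source of the energy savings the abstract refers to.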
