Abstract
Multi-level human motion tracking and analysis remains an open problem in person surveillance, especially under constrained computational and communication resources. In this paper, we propose a sensing paradigm that addresses this challenge efficiently and effectively. The proposed paradigm comprises two components. First, we design a compressive infrared sensing model that samples and encodes multi-level human motion directly into low-level sensor data, without an intermediate scene-recovery step. Second, we employ lightweight data-processing algorithms to detect and segment human motion at different levels and to decode the location information adaptively. We used self-developed pyroelectric infrared (PIR) sensor nodes to construct a wireless distributed network and conducted experiments in a real office environment. The experimental results showed that the proposed paradigm tracks human motion robustly at two levels with a low computational and communication burden (a 5×1 sensor data stream at 5 Hz for processing). Our paradigm bridges the gap between low-level sensor data and high-level analysis for large-scale automated surveillance, and can serve as useful guidance for system design.
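To make the measurement-and-decoding idea concrete, the sketch below shows a minimal, hypothetical compressive binary-sensing model in Python: a 5-row mask matrix stands in for the masked fields of view of five PIR nodes, a 5×1 binary sample is produced per time step, and the location is decoded by nearest-codeword matching. The mask values, the number of spatial cells, the noise level, the threshold, and the single-occupant assumption are all illustrative and are not taken from the paper.

```python
import numpy as np

# Hypothetical binary visibility masks: rows = 5 PIR sensor nodes,
# columns = 8 spatial cells. A 1 means the cell lies inside that
# sensor's masked field of view. (Illustrative values only.)
PHI = np.array([
    [1, 0, 1, 0, 1, 0, 1, 0],
    [1, 1, 0, 0, 1, 1, 0, 0],
    [1, 1, 1, 1, 0, 0, 0, 0],
    [0, 1, 0, 1, 1, 0, 1, 0],
    [0, 0, 1, 1, 0, 1, 1, 0],
], dtype=int)


def measure(occupied_cell: int, rng: np.random.Generator) -> np.ndarray:
    """Compressive measurement: each sensor outputs 1 if the moving person
    is inside its masked field of view, plus small additive noise."""
    y = PHI[:, occupied_cell].astype(float)
    return y + 0.05 * rng.standard_normal(y.shape)


def decode(y: np.ndarray) -> int:
    """Decode location by matching the thresholded 5x1 sample against the
    mask columns (valid when at most one cell is occupied at a time)."""
    b = (y > 0.5).astype(int)
    hamming = np.sum(PHI != b[:, None], axis=0)  # distance to each cell's code
    return int(np.argmin(hamming))


rng = np.random.default_rng(0)
true_cell = 3
y = measure(true_cell, rng)  # one 5x1 sample of the 5 Hz data stream
print("decoded cell:", decode(y), "| true cell:", true_cell)
```

In this toy setup each spatial cell maps to a distinct column of the mask matrix, so a single 5×1 binary sample suffices to localize one person without reconstructing the scene; the actual mask design and decoding rules used in the paper may differ.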