Abstract

In modern security systems such as CCTV-based surveillance applications, real-time deep-learning-based computer vision algorithms are actively utilized for always-on automated execution. Such real-time computer vision systems are highly computation-intensive and exhaust computation resources when performed on devices with a limited amount of resources. Given the nature of Internet-of-Things networks, these devices are connected to main computing platforms through offloading techniques. In addition, a real-time computer vision system such as a CCTV system with image recognition functionality performs better when arrival images are sampled at a higher rate, because a higher rate minimizes missed video frame feeds. However, sampling at overwhelmingly high rates exposes the system to the risk of a queue overflow that hampers the reliability of the system. To address this issue, this paper proposes a novel queue-aware dynamic sampling rate adaptation algorithm that optimizes the sampling rates to maximize computer vision performance (i.e., the recognition ratio) while avoiding queue overflow, under the Lyapunov optimization framework. Extensive system simulations show that the proposed approaches provide remarkable gains.
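The queue-aware adaptation described above can be sketched with a standard Lyapunov drift-plus-penalty rule: each time slot, pick the sampling rate that maximizes a weighted recognition-performance term minus a queue-backlog term. The candidate rates, the concave `recognition_gain` model, the fixed service capacity, and the trade-off weight `V` below are all illustrative assumptions, not values from the paper.

```python
import math

# Candidate sampling rates (frames per slot) -- hypothetical values.
RATES = [1.0, 2.0, 4.0, 8.0]

def recognition_gain(s):
    """Assumed concave recognition-performance proxy: higher sampling
    rates help, but with diminishing returns (hypothetical model)."""
    return math.log(1.0 + s)

def choose_rate(q_len, V=50.0):
    """Drift-plus-penalty rule: pick the rate s that maximizes
    V * gain(s) - Q(t) * s, trading recognition performance
    against queue growth. Larger V favors performance; a large
    backlog Q(t) pushes the choice toward lower rates."""
    return max(RATES, key=lambda s: V * recognition_gain(s) - q_len * s)

def simulate(T=500, service=5.0, V=50.0):
    """Deterministic toy simulation: arrivals per slot equal the
    chosen sampling rate, and a fixed `service` number of frames
    is processed per slot."""
    q, trace = 0.0, []
    for _ in range(T):
        s = choose_rate(q, V)
        trace.append(s)
        q = max(q + s - service, 0.0)  # Lindley queue update
    return q, trace
```

In this sketch an empty queue lets the controller run at the highest candidate rate, while a growing backlog automatically throttles sampling before the queue can overflow, which is the qualitative behavior the abstract claims.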
