Abstract
Visual sensor networks (VSNs) have attracted the interest of researchers worldwide in recent years and are expected to play a major role in the evolution of the Internet-of-Things (IoT). When used to perform visual analysis tasks, VSNs may be operated according to two different paradigms. In the traditional compress-then-analyze paradigm, images are acquired, compressed, and transmitted for further analysis. Conversely, in the analyze-then-compress paradigm, image features are extracted by visual sensor nodes, encoded, and then delivered to a remote destination where the analysis is performed. The question this paper aims to answer is: what is the best visual analysis paradigm in VSNs? To this end, we first empirically characterize the rate-energy-accuracy performance of the two aforementioned paradigms. Then, we leverage such models to formulate a resource allocation problem for VSNs. The problem optimally allocates the specific paradigm used by each camera node in the network and the related transmission source rate, with the objective of optimizing the accuracy of the visual analysis task and the VSN coverage. Experimental results over several VSN instances demonstrate that there is no "winning" paradigm; rather, the best performance is obtained by allowing the two to coexist and by properly optimizing their utilization.