Abstract

Visual monitoring supported by the Internet of Things (IoT) increasingly relies on analyzing massive volumes of image data through human–machine interactive mechanisms. However, maintaining the efficiency of such a monitoring system in complex environments under energy or bandwidth constraints is challenging. Although both human perception and machine analysis performance must be satisfied, transmitting extra information to do so should be avoided for efficient resource utilization. To this end, we propose a human–machine interaction-oriented image coding (HMI-IC) framework based on deep learning. In this framework, machines first provide early monitoring messages consisting of analysis results and preview images; humans can then additionally request high-quality images of objects of interest. HMI-IC compresses each collected image into a layered data stream that serves analysis, preview visualization, and high-quality reconstruction. Adaptive coding transmission fits the differing demands of the two stages according to resource constraints. Experimental results show that our method improves both accuracy and inference speed on compressed images, with overall coding efficiency comparable to JPEG2000. To validate HMI-IC's efficiency in practical terms, we present two visual-monitoring use cases: one energy constrained and one bandwidth constrained.
