Abstract

Widespread deployment of the Internet of Things (IoT) has changed the way network services are developed, deployed, and operated. Many advanced IoT devices are equipped with visual sensors, forming the so-called visual IoT. Typically, the sender compresses images and transmits them over the communication network; the receiver then decodes the images and analyzes them for downstream applications. However, image compression and semantic inference are generally conducted separately, so current compression algorithms cannot be directly transplanted for semantic inference. A collaborative image compression and classification framework for visual IoT applications is proposed, which combines image compression with semantic inference through multi-task learning. In particular, multi-task Generative Adversarial Networks (GANs) are described, comprising an encoder, quantizer, generator, discriminator, and classifier, to conduct image compression and classification simultaneously. The key to the proposed framework is a quantized latent representation shared by both compression and classification. GANs with high perceptual quality can achieve low-bitrate compression and reduce the amount of data transmitted. In addition, having the two tasks share the same features greatly reduces computing resources, which is especially valuable in environments with extremely limited resources. Extensive experiments show that the collaborative compression and classification framework is effective and useful for visual IoT applications.
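The abstract's core idea, a single quantized latent representation feeding both a reconstruction (generator) head and a classifier head, can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: all dimensions, the linear encoder/heads, and the uniform scalar quantizer are assumptions standing in for the learned CNN components described in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration only.
IMG_DIM, LATENT_DIM, NUM_CLASSES, LEVELS = 64, 8, 10, 4

# Encoder: a fixed linear projection stands in for the learned CNN encoder.
W_enc = rng.standard_normal((LATENT_DIM, IMG_DIM)) / np.sqrt(IMG_DIM)

def encode(x):
    return W_enc @ x

def quantize(z, levels=LEVELS):
    # Uniform scalar quantization to integer symbols in [0, levels - 1];
    # these symbols are what would be entropy-coded and transmitted.
    z_clipped = np.clip(z, -1.0, 1.0)
    return np.round((z_clipped + 1.0) / 2.0 * (levels - 1)).astype(int)

def dequantize(symbols, levels=LEVELS):
    return symbols / (levels - 1) * 2.0 - 1.0

# Two heads share the SAME quantized latent: a generator (image
# reconstruction) and a classifier, mirroring the collaborative design.
W_gen = rng.standard_normal((IMG_DIM, LATENT_DIM)) / np.sqrt(LATENT_DIM)
W_cls = rng.standard_normal((NUM_CLASSES, LATENT_DIM)) / np.sqrt(LATENT_DIM)

x = rng.standard_normal(IMG_DIM)        # stand-in for an input image
symbols = quantize(encode(x))           # sender side: compress
z_hat = dequantize(symbols)             # receiver side: recover latent
x_hat = W_gen @ z_hat                   # task 1: image reconstruction
label = int(np.argmax(W_cls @ z_hat))   # task 2: classification
```

The point of the sketch is that `z_hat` is computed once and reused by both heads, which is why the shared-feature design saves computation on resource-limited devices.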
