Abstract

We address image coding for joint human-machine vision, i.e., a decoded image that serves both human observation and machine analysis/understanding. Human vision and machine vision have previously been studied separately, through image (signal) compression and (image) feature compression, respectively. Recently, several studies have pursued joint compression of images and features for joint human-machine vision, but the correlation between images and features remains unclear. We identify the deep network as a powerful toolkit for generating structural image representations. From the perspective of information theory, the deep features of an image naturally form an entropy-decreasing series: a scalable bitstream is obtained by compressing the features backward, from a deeper layer to a shallower layer, culminating in the image signal. Moreover, by training the deep network for a given semantic analysis task, or for multiple tasks, we obtain learned representations whose deep features are related to semantics. With these learned structural representations, we propose SSSIC, a framework that produces an embedded bitstream which can be either partially decoded for semantic analysis or fully decoded for human vision. We implement an exemplar SSSIC scheme using coarse-to-fine image classification as the driving semantic analysis task, and we extend the scheme to object detection and instance segmentation tasks. The experimental results demonstrate the effectiveness of the proposed SSSIC framework and show that the exemplar scheme achieves higher compression efficiency than separate compression of images and features.
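The layered organization described above — a deep, low-entropy semantics-bearing layer, refined layer by layer until the full image signal is reached — can be illustrated with a toy sketch. Everything here (the mean-pooling pyramid, the function names, the trivial "classifier") is our own illustrative assumption, not the paper's actual learned network; it only shows how a prefix of such a layered stream can serve analysis while the full stream reconstructs the image losslessly.

```python
import numpy as np

def block_mean(x, b):
    # Downsample by averaging non-overlapping b-by-b blocks.
    h, w = x.shape
    return x.reshape(h // b, b, w // b, b).mean(axis=(1, 3))

def upsample(x, b):
    # Nearest-neighbor upsampling by factor b.
    return np.kron(x, np.ones((b, b)))

def encode(image):
    # Layers ordered from deepest (lowest entropy) to the image signal.
    deep = block_mean(image, 8)                 # 4x4 "semantic" layer
    shallow = block_mean(image, 4)              # 8x8 intermediate layer
    shallow_res = shallow - upsample(deep, 2)   # refinement of the deep layer
    image_res = image - upsample(shallow, 4)    # final signal refinement
    return deep, shallow_res, image_res

def decode_semantic(deep):
    # Partial decode: a toy "analysis task" on the deepest layer alone.
    return "bright" if deep.mean() > 0.5 else "dark"

def decode_image(deep, shallow_res, image_res):
    # Full decode: add refinements back from deep to shallow to signal.
    shallow = upsample(deep, 2) + shallow_res
    return upsample(shallow, 4) + image_res

rng = np.random.default_rng(0)
img = rng.random((32, 32))
layers = encode(img)
label = decode_semantic(layers[0])              # uses only the stream prefix
assert np.allclose(decode_image(*layers), img)  # full stream is lossless
```

Because each residual layer conditions on the reconstruction from the deeper layers, the pieces form an embedded stream: decoding stops early for analysis, or continues to the end for human viewing.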
