Abstract
Compactly representing visual information plays a fundamental role in optimizing the ultimate utility of myriad visual data-centered applications. Numerous approaches have been proposed to efficiently compress texture for human visual perception and visual features for machine intelligence; however, much less work has been dedicated to studying the interactions between the two. Here, we investigate the integration of feature and texture compression and show that a universal and collaborative visual information representation can be achieved in a hierarchical way. In particular, we study feature and texture compression in a scalable coding framework, where the base layer conveys deep learning features and the enhancement layer aims to perfectly reconstruct the texture. Leveraging the strong generative capability of deep neural networks, the gap between the base and enhancement layers is further narrowed by feature-level texture reconstruction, which predicts texture representations from the base-layer features. As such, only the residuals between the original and reconstructed textures need to be conveyed in the enhancement layer. To improve the efficiency of the proposed framework, the base-layer neural network is trained in a multitask manner, such that the learned features support both high-quality reconstruction and high-accuracy analysis. The framework and optimization strategies are further applied to face image compression, achieving promising coding performance in terms of both rate-fidelity and rate-accuracy evaluations.
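To make the two-layer idea concrete, below is a minimal sketch of the scalable pipeline in PyTorch. All module architectures, dimensions, and loss weights here are illustrative assumptions, not the paper's actual networks: a base-layer encoder produces a compact feature, a decoder reconstructs texture from that feature, the enhancement layer carries only the texture residual, and a multitask loss ties reconstruction quality to analysis accuracy.

```python
# Illustrative sketch of the scalable feature/texture coding idea.
# FeatureEncoder, TextureDecoder, AnalysisHead, and the 0.1 loss weight
# are hypothetical placeholders, not the architecture from the paper.
import torch
import torch.nn as nn

class FeatureEncoder(nn.Module):
    """Base layer: maps an image to a compact deep feature."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 4, stride=2, padding=1),
        )
    def forward(self, x):
        return self.net(x)

class TextureDecoder(nn.Module):
    """Feature-level texture reconstruction: predicts texture from the base feature."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(feat_dim, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
    def forward(self, f):
        return self.net(f)

class AnalysisHead(nn.Module):
    """Task branch (e.g., classification) used for multitask training of the base layer."""
    def __init__(self, feat_dim=128, num_classes=10):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(feat_dim, num_classes)
    def forward(self, f):
        return self.fc(self.pool(f).flatten(1))

encoder, decoder, head = FeatureEncoder(), TextureDecoder(), AnalysisHead()

x = torch.rand(4, 3, 64, 64)         # input images
labels = torch.randint(0, 10, (4,))  # analysis-task labels

f = encoder(x)            # base layer: compact deep feature
x_hat = decoder(f)        # texture predicted from the feature
residual = x - x_hat      # the enhancement layer need only convey this residual

# Multitask objective: the learned feature must support both high-quality
# texture reconstruction and high-accuracy analysis (weight is a placeholder).
loss = nn.functional.mse_loss(x_hat, x) \
       + 0.1 * nn.functional.cross_entropy(head(f), labels)
loss.backward()
```

In an actual codec, `f` and `residual` would each be quantized and entropy-coded; the point of the sketch is that a better feature-to-texture prediction shrinks the residual, and hence the enhancement-layer bitrate.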