Abstract

This paper investigates the vulnerability of the deep prior used in deep-learning-based image restoration. In particular, image super-resolution, which relies on strong prior information to regularize the solution space and plays an important role in image pre-processing for subsequent viewing and analysis, is shown to be vulnerable to well-designed adversarial examples. We formulate the adversarial example generation process as an optimization problem, and, given a super-resolution model, design three types of attack according to the downstream task: (i) style transfer attack; (ii) classification attack; (iii) caption attack. Another interesting property of our design is that the attack is hidden behind the super-resolution process, such that the usability of the low-resolution images themselves is not significantly affected. We show that the vulnerability to adversarial examples brings risks to pre-processing modules such as super-resolution deep neural networks, which is of paramount significance for the security of the whole system. Our results also shed light on the potential security issues of pre-processing modules and raise concerns regarding the corresponding countermeasures against adversarial examples.
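
To illustrate the general idea of optimizing a perturbation on the low-resolution input so that the attack only manifests after super-resolution, the following is a minimal sketch of the classification-attack variant using a projected-gradient-style optimization in PyTorch. The names sr_model, classifier, lr_image, and target_class are hypothetical placeholders, and the loss and constraint choices are assumptions for illustration rather than the exact formulation used in the paper.

```python
import torch
import torch.nn.functional as F

def sr_classification_attack(sr_model, classifier, lr_image, target_class,
                             epsilon=4 / 255, alpha=1 / 255, steps=50):
    """Craft a small perturbation on the low-resolution input so that the
    super-resolved output is classified as `target_class`, while the
    low-resolution image stays within an L-infinity ball of the original."""
    delta = torch.zeros_like(lr_image, requires_grad=True)
    for _ in range(steps):
        sr_out = sr_model(lr_image + delta)           # upscale the perturbed LR image
        logits = classifier(sr_out)                   # downstream task applied to the SR output
        loss = F.cross_entropy(logits, target_class)  # push the SR output toward the target label
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()        # targeted gradient-sign step
            delta.clamp_(-epsilon, epsilon)           # keep the LR change imperceptible
            delta.grad.zero_()
    return (lr_image + delta).clamp(0, 1).detach()
```

The style transfer and caption attacks follow the same pattern, replacing the cross-entropy term with a loss defined on the respective downstream model's output.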
