Abstract

The rising demand for fine-grained data in deep-learning (DL) based intelligent systems poses challenges for the collection and transmission capabilities of real-world devices. Deep compressive sensing, which employs deep learning to compress signals at the sensing stage and reconstruct them with high quality at the receiving stage, provides a state-of-the-art solution to the problem of large-scale fine-grained data. However, recent work has shown that serious security flaws exist in current deep learning methods and that this instability is universal across DL-based image reconstruction methods. In this paper, we assess the security risks that deep compressive sensing introduces into widely-used computer vision systems facing adversarial example attacks and poisoning attacks. To carry out the security inspection in an unbiased and complete manner, we develop a comprehensive methodology and a set of evaluation metrics covering all combinations of attack methods, datasets (application scenarios), categories of deep compressive sensing models, and image classifiers. The results demonstrate that deep compressive sensing models unknown to adversaries can protect a computer vision system from adversarial example attacks and poisoning attacks, whereas models exposed to adversaries make the system more vulnerable.
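To make the evaluated setting concrete, the sketch below shows a minimal PyTorch version of the pipeline the abstract describes: a learned sensing operator that compresses the signal, a reconstruction network that recovers it, and a one-step FGSM adversarial example crafted through the full pipeline (the white-box case, where the compressive sensing model is exposed to the adversary). The class names, layer sizes, and the choice of FGSM are illustrative assumptions, not the paper's actual architectures or attack suite.

```python
import torch
import torch.nn as nn

class CSPipeline(nn.Module):
    """Hypothetical deep compressive sensing pipeline (illustrative only):
    a linear sensing stage y = A x followed by a small reconstruction net."""

    def __init__(self, n=784, m=196):  # e.g. 28x28 images at a 25% sampling rate
        super().__init__()
        self.sense = nn.Linear(n, m, bias=False)   # sensing stage: compress
        self.recon = nn.Sequential(                # receiving stage: reconstruct
            nn.Linear(m, 512), nn.ReLU(),
            nn.Linear(512, n),
        )

    def forward(self, x):
        y = self.sense(x.flatten(1))               # compressed measurements
        return self.recon(y).view_as(x)            # reconstructed signal


def fgsm_attack(cs_model, classifier, x, label, eps=0.03):
    """One-step FGSM perturbation computed end-to-end through
    sensing -> reconstruction -> classification. Assumes the attacker
    can backpropagate through cs_model (i.e., it is not secret)."""
    x = x.clone().detach().requires_grad_(True)
    logits = classifier(cs_model(x))
    loss = nn.functional.cross_entropy(logits, label)
    loss.backward()
    # Move each pixel one epsilon-step in the loss-increasing direction.
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```

In the contrasting setting the abstract highlights, the adversary cannot differentiate through `cs_model` and must attack the classifier alone; the reconstruction stage then acts as an unknown transform that can weaken the perturbation, which is consistent with the reported protective effect of CS models unknown to adversaries.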
