Abstract

Google has recently introduced the Cloud Vision API for image analysis. According to the demonstration website, the API quickly classifies images into thousands of categories, detects individual objects and faces within images, and finds and reads printed words contained within them. It can also be used to detect different types of inappropriate content, from adult to violent. In this paper, we evaluate the robustness of the Google Cloud Vision API to input perturbation. In particular, we show that by adding sufficient noise to an image, the API generates completely different outputs for the noisy image, while a human observer would still perceive its original content. We show that the attack is consistently successful by performing extensive experiments on different image types, including natural images, images containing faces, and images containing text. For instance, using images from the ImageNet dataset, we found that adding an average of 14.25% impulse noise is enough to deceive the API. Our findings indicate the vulnerability of the API in adversarial environments. For example, an adversary can bypass an image filtering system by adding noise to inappropriate images. We then show that when a noise filter is applied to input images, the API generates mostly the same outputs for restored images as for original images. This observation suggests that the Cloud Vision API can readily benefit from noise filtering, without the need to update its image analysis algorithms.
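As a rough illustration of the perturbation and restoration steps described above (not the authors' actual code, whose exact noise model and filter are not specified in this abstract), the sketch below adds salt-and-pepper impulse noise at a chosen density and applies a median filter, one standard choice for removing impulse noise. Function names and parameters are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter

def add_impulse_noise(image, density=0.1425, rng=None):
    """Corrupt a fraction `density` of pixels with salt-and-pepper (impulse) noise.

    `image` is assumed to be a uint8 array of shape (H, W) or (H, W, C).
    The default density mirrors the 14.25% average reported for ImageNet images,
    but this is only an illustrative setting.
    """
    rng = np.random.default_rng() if rng is None else rng
    noisy = image.copy()
    corrupt = rng.random(image.shape[:2]) < density   # which pixels to corrupt
    salt = rng.random(image.shape[:2]) < 0.5          # half salt (white), half pepper (black)
    noisy[corrupt & salt] = 255
    noisy[corrupt & ~salt] = 0
    return noisy

def restore(noisy, size=3):
    """Restore a noisy image with a per-channel median filter (one possible noise filter)."""
    if noisy.ndim == 3:
        return np.stack(
            [median_filter(noisy[..., c], size=size) for c in range(noisy.shape[-1])],
            axis=-1,
        )
    return median_filter(noisy, size=size)
```

In this setup, the noisy image would be sent to the API to test whether its outputs change, and the restored image would be sent to check whether filtering recovers the original outputs.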
