Abstract
With the spread of social media and smartphones, people share their daily lives through large numbers of images, but this convenience also raises the risk of privacy leakage. Effective methods are therefore needed to infer the privacy risk of images and to identify images that may disclose private information. Several works have addressed this problem with deep learning models; however, little is known about how these models infer an image's privacy label, so it is difficult to understand why an image may disclose privacy. Inspired by recent research on graph neural networks, we introduce prior knowledge into deep models to make the inference more explainable. We propose Graph-based neural networks for Image Privacy (GIP) to infer the privacy risk of images. GIP focuses mainly on the objects in an image, and its knowledge graph is extracted from the objects in the dataset without relying on external knowledge sources. Experimental results show that GIP outperforms object-based methods and achieves performance comparable even to a multi-modal fusion method. The results indicate that introducing the knowledge graph not only makes the deep model more explainable but also makes better use of the object information provided by the images. Combining knowledge graphs with deep learning is thus a promising direction for protecting image privacy.
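To make the object-and-graph idea in the abstract concrete, the following is a minimal sketch, not the paper's actual GIP architecture: it builds an adjacency matrix from how often detected object classes co-occur across images, runs two generic graph-convolution layers over the object nodes, and pools the result into a binary private/public score. All names (PrivacyGCN, cooccurrence_adjacency), dimensions, and the toy detections are hypothetical illustrations, and the random node features stand in for whatever object representations the real model would use.

```python
import torch
import torch.nn as nn


class SimpleGCNLayer(nn.Module):
    """One graph-convolution step: H' = ReLU(A_norm @ H @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, h, a_norm):
        return torch.relu(a_norm @ self.linear(h))


class PrivacyGCN(nn.Module):
    """Toy model: object-node features -> two GCN layers -> mean-pooled
    graph embedding -> binary private/public logits."""
    def __init__(self, feat_dim, hidden_dim=64):
        super().__init__()
        self.gcn1 = SimpleGCNLayer(feat_dim, hidden_dim)
        self.gcn2 = SimpleGCNLayer(hidden_dim, hidden_dim)
        self.classifier = nn.Linear(hidden_dim, 2)  # [private, public]

    def forward(self, node_feats, a_norm):
        h = self.gcn1(node_feats, a_norm)
        h = self.gcn2(h, a_norm)
        graph_emb = h.mean(dim=0)  # pool over object nodes
        return self.classifier(graph_emb)


def cooccurrence_adjacency(object_lists, num_classes):
    """Adjacency from how often object classes co-occur in the same image,
    with self-loops and symmetric normalization D^-1/2 A D^-1/2."""
    a = torch.zeros(num_classes, num_classes)
    for objs in object_lists:
        for i in objs:
            for j in objs:
                if i != j:
                    a[i, j] += 1.0
    a += torch.eye(num_classes)
    d_inv_sqrt = torch.diag(a.sum(dim=1).pow(-0.5))
    return d_inv_sqrt @ a @ d_inv_sqrt


if __name__ == "__main__":
    # Hypothetical detections: object-class indices found in three images.
    detections = [[0, 1], [1, 2], [0, 2, 3]]
    num_classes, feat_dim = 4, 16
    a_norm = cooccurrence_adjacency(detections, num_classes)
    node_feats = torch.randn(num_classes, feat_dim)  # stand-in object features
    model = PrivacyGCN(feat_dim)
    logits = model(node_feats, a_norm)
    print(logits)  # unnormalized scores for [private, public]
```

In this sketch the graph is built purely from the dataset's own object co-occurrences, mirroring the abstract's claim that no external knowledge source is required; the actual GIP model may differ in graph construction, layer design, and pooling.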