Abstract

Microblogging applications are widely used to disseminate information through concise text and images. They are, however, also among the largest platforms for circulating forged images: digital photographs that have been modified to deceive or distort the information they communicate. Manipulated images posted on microblogging apps such as Twitter often provoke biased user emotions, leading to harmful consequences such as religious feuds or riots. Microblogging platforms and the fact-checking industry are therefore investing in artificial intelligence solutions to detect these forged images in time. Many image forensic techniques have been proposed, but their effectiveness falls short on real-world images shared over microblogging sites: because forged images on these platforms are typically altered with multiple manipulation techniques, single-purpose forensic methods struggle to detect them. This paper proposes a customized convolutional neural network with an attention mechanism to spot fake images shared over microblogging platforms. Deep convolutional networks learn the intrinsic feature set of images and can thereby detect forgeries. To handle multiple manipulations in an image, the attention mechanism focuses on the most relevant image regions while learning the inherent feature sets. The model initializes the kernel weights of the neural network with high-pass filters from the image processing domain, which helps the model converge faster and achieve better accuracy. The pooling layers are designed specifically to handle images from microblogging sites. The solution is general and can detect complex tampering scenarios such as text editing, face swapping, copy-move, splicing and mirroring. Local Interpretable Model-agnostic Explanations (LIME) is used to localize the manipulated region in a forged image; it also adds interpretability and confidence to the proposed model, a common concern with deep learning models.
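The abstract does not specify which high-pass filters are used for kernel initialization; as a minimal sketch, the snippet below assumes a Laplacian-style 3x3 residual filter (common in image forensics, e.g. SRM-style filters) and shows how a first convolution layer's weights could be seeded with it instead of random noise. The function name and jitter scale are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Assumed filter: a 3x3 Laplacian-style high-pass kernel (sums to zero),
# of the kind commonly used in forensic residual analysis. The paper's
# exact filter bank is not given in the abstract.
HIGH_PASS_3X3 = np.array([
    [-1.,  2., -1.],
    [ 2., -4.,  2.],
    [-1.,  2., -1.],
], dtype=np.float32)

def init_conv_weights(out_channels, in_channels, kernel=HIGH_PASS_3X3, seed=0):
    """Return a (out, in, 3, 3) convolution weight tensor whose kernels
    start as high-pass filters, so the first layer responds to manipulation
    residuals from the very first epoch (hypothetical helper)."""
    rng = np.random.default_rng(seed)
    # Tile the 3x3 kernel across all output/input channel pairs.
    w = np.tile(kernel, (out_channels, in_channels, 1, 1))
    # Small random jitter so the filters are not identical copies and can
    # diverge from one another during training.
    w += rng.normal(scale=0.01, size=w.shape).astype(np.float32)
    return w

weights = init_conv_weights(out_channels=32, in_channels=3)
print(weights.shape)  # (32, 3, 3, 3)
```

Seeding kernels this way biases early feature maps toward edge and noise residuals, which is one plausible reason such initialization speeds convergence on forgery cues.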
The model is evaluated on the publicly available CASIA 2.0 dataset, where it achieves an accuracy of 94.7%, better than previous state-of-the-art results in fake image detection. To test the model on real-world images published on Twitter, a recent dataset was built from an Indian viewpoint; on it, the model achieves a modest accuracy of 83.2%. These experiments show that the proposed model can accurately detect forged images on social platforms. It can be used in the fact-checking field to reduce manual effort and support fact-checkers in swift decision making.
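The abstract reports that LIME localizes the manipulated region. As a rough, self-contained sketch of the underlying idea only: regular grid "superpixels", random masking, and an ordinary least-squares surrogate are simplifying assumptions here; the actual LIME library uses proper image segmentation and a weighted local model, and the paper's configuration is not given.

```python
import numpy as np

def lime_region_scores(image, predict_fn, grid=4, num_samples=200, seed=0):
    """Simplified LIME-style localization (sketch): split the image into a
    grid of regions, randomly blank regions out, query the forgery
    classifier on each perturbed image, and fit a linear surrogate whose
    coefficients score how much each region drives the 'forged' prediction.
    Assumes grid evenly divides the image dimensions."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    n_regions = grid * grid
    # Binary design matrix: 1 = region kept, 0 = region blanked.
    Z = rng.integers(0, 2, size=(num_samples, n_regions))
    Z[0] = 1  # include the unperturbed image as one sample
    ys = np.empty(num_samples)
    for i, z in enumerate(Z):
        perturbed = image.copy()
        for r in range(n_regions):
            if z[r] == 0:
                r0 = (r // grid) * (h // grid)
                c0 = (r % grid) * (w // grid)
                perturbed[r0:r0 + h // grid, c0:c0 + w // grid] = 0
        ys[i] = predict_fn(perturbed)  # probability of 'forged'
    # Least-squares fit: one coefficient per region = its influence.
    coef, *_ = np.linalg.lstsq(Z.astype(float), ys, rcond=None)
    return coef.reshape(grid, grid)
```

Usage: with a toy classifier whose "forged" score depends only on the top-left patch, `lime_region_scores(img, fn).argmax()` points at that patch, which is exactly the localization behavior the abstract attributes to LIME.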
