Abstract

With modern technological advancements, the easy availability of image editing tools has dramatically reduced the cost, effort, and expertise needed to produce and spread persuasive visual tampering. Through popular online platforms such as Facebook, Twitter, and Instagram, manipulated images are distributed worldwide, and users of these platforms may be unaware of the existence and spread of forged images. Such images have a significant impact on society and can mislead decision-making processes in areas like health care, sports, and crime investigation. In addition, altered images can be used to propagate misleading information that interferes with democratic processes (e.g., elections and government legislation) and crisis situations (e.g., pandemics and natural disasters). Therefore, there is a pressing need for effective methods to detect and identify such forgeries. Various techniques are currently employed to detect these forgeries. Traditional techniques rely on handcrafted or shallow-learning features; selecting features from images is challenging because the researcher must decide which features are important, and when the number of features to be extracted is large, feature extraction becomes time-consuming and tedious. Deep learning networks have recently shown remarkable performance in extracting complex statistical characteristics from large inputs, and they efficiently learn the underlying hierarchical representations. However, deep learning networks for handling these forgeries are expensive in terms of parameter count, storage, and computational cost. This research work presents Mask R-CNN with MobileNet, a lightweight model, to detect and identify copy-move and image splicing forgeries. We have performed a comparative analysis of the proposed work with ResNet-101 on seven standard datasets. Our lightweight model outperforms it on the COVERAGE and MICC-F2000 datasets for copy-move forgery and on the COLUMBIA dataset for image splicing. This research work also provides a forged percentage score for a region in an image.
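
The abstract's central technical claim is a Mask R-CNN detector paired with a lightweight MobileNet backbone. Below is a minimal sketch, assuming a MobileNetV2 feature extractor and torchvision's generic MaskRCNN class; the anchor sizes, RoIAlign settings, and two-class head (background vs. forged region) are illustrative assumptions, not the authors' exact configuration.

    import torchvision
    from torchvision.models.detection import MaskRCNN
    from torchvision.models.detection.rpn import AnchorGenerator
    from torchvision.ops import MultiScaleRoIAlign

    # MobileNetV2 feature extractor as a lightweight backbone (assumed variant;
    # weights="DEFAULT" loads ImageNet weights and needs torchvision >= 0.13).
    backbone = torchvision.models.mobilenet_v2(weights="DEFAULT").features
    backbone.out_channels = 1280  # MaskRCNN reads this attribute

    # Single feature map, so one anchor-size tuple and one RoIAlign level.
    anchor_generator = AnchorGenerator(
        sizes=((32, 64, 128, 256, 512),),
        aspect_ratios=((0.5, 1.0, 2.0),),
    )
    box_roi_pool = MultiScaleRoIAlign(featmap_names=["0"], output_size=7, sampling_ratio=2)
    mask_roi_pool = MultiScaleRoIAlign(featmap_names=["0"], output_size=14, sampling_ratio=2)

    # Two classes: background and forged region (copy-move or spliced area).
    model = MaskRCNN(
        backbone,
        num_classes=2,
        rpn_anchor_generator=anchor_generator,
        box_roi_pool=box_roi_pool,
        mask_roi_pool=mask_roi_pool,
    )
    model.eval()

Swapping only the backbone in this way is what keeps the parameter count and computational cost low relative to a ResNet-101 based Mask R-CNN, while the detection, RoI, and mask heads stay unchanged.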

Highlights

  • Digital images are used in almost every domain, such as public health services, political blogs, social media platforms, judicial inquiries, education systems, armed forces, businesses, and so on

  • With the use of image/photo editing tools like Canva, CorelDRAW, PicMonkey, PaintShop Pro, and many other applications, it has become very easy to manipulate images and videos. Such digitally altered images are a primary source for spreading misleading information, impacting individuals and society. The deliberate manipulation of reality through visual communication with the aim of causing harm, stress, and disruption is a significant risk to society, given the increasing pace at which information is shared through social media platforms such as Twitter, Quora, and Facebook

  • A total of 3000 images are used for training, and 700 images are used for testing purposes. The training images are resized so that they retain their aspect ratio. The mask size is 28 × 28 pixels, and the image size is 512 × 512 pixels (a preprocessing sketch follows this list). This approach differs from the original Mask R-CNN (region-based convolutional neural network) [39] approach, where images are resized with 800 pixels regarded as the smallest size and trimmed at 512 pixels as the largest
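
The sizes quoted in the last highlight can be turned into a short preprocessing sketch. Only the 512 × 512 input size and the 28 × 28 mask resolution come from the text; the zero-padding scheme, the resampling modes, and the file name forged.jpg are assumptions for illustration.

    import numpy as np
    from PIL import Image

    TARGET = 512     # network input size stated in the highlights
    MASK_SIZE = 28   # per-instance mask resolution stated in the highlights

    def resize_keep_aspect(img: Image.Image, target: int = TARGET) -> Image.Image:
        """Scale the longer side to `target`, then zero-pad to a square canvas."""
        scale = target / max(img.size)
        new_w, new_h = round(img.width * scale), round(img.height * scale)
        resized = img.resize((new_w, new_h), Image.BILINEAR)
        canvas = Image.new(img.mode, (target, target))  # black padding
        canvas.paste(resized, ((target - new_w) // 2, (target - new_h) // 2))
        return canvas

    def shrink_mask(mask: np.ndarray, size: int = MASK_SIZE) -> np.ndarray:
        """Downsample a binary forgery mask to the 28 x 28 training resolution."""
        m = Image.fromarray((mask > 0).astype(np.uint8) * 255)
        m = m.resize((size, size), Image.NEAREST)
        return (np.array(m) > 0).astype(np.uint8)

    image = resize_keep_aspect(Image.open("forged.jpg").convert("RGB"))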



Introduction

Digital images are used in almost every domain, such as public health services, political blogs, social media platforms, judicial inquiries, education systems, armed forces, businesses, and so on. The research work in [37] uses a CNN for detecting copy-move and image splicing forgeries. The study in [39] uses Mask R-CNN and the Sobel filter for detection and localization of copy-move and image splicing forgeries. The research study in [56] uses color illumination, deep convolutional neural networks, and semantic segmentation to detect and localize image splicing forgery. The architecture of the proposed system for detecting and localizing copy-move and image splicing forgery, and for calculating the forged percentage (the share of forged pixels in the total pixel count), is explained below. The region proposal network (RPN, Figure 6) takes an input of any size and generates proposals by sliding a small network over the output of the last layer of the image feature map.
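
The forged percentage mentioned above is the ratio of forged pixels to the total pixel count of the image. A minimal sketch of that calculation from a full-resolution predicted mask follows; the 0.5 threshold and the example mask are assumptions, not the paper's reported settings.

    import numpy as np

    def forged_percentage(pred_mask: np.ndarray, threshold: float = 0.5) -> float:
        """Percentage of image pixels flagged as forged.

        pred_mask: H x W array of per-pixel forgery scores, e.g. a Mask R-CNN
        instance mask pasted back to full image resolution.
        """
        forged_pixels = int((pred_mask >= threshold).sum())
        total_pixels = pred_mask.size
        return 100.0 * forged_pixels / total_pixels

    # Example: a 512 x 512 prediction in which one 64 x 64 block is forged.
    mask = np.zeros((512, 512), dtype=np.float32)
    mask[100:164, 200:264] = 1.0
    print(f"forged region covers {forged_percentage(mask):.2f}% of the image")  # ~1.56%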

D_k × D_k Conv (depthwise convolution kernel of the MobileNet building block)
Dataset Annotation
Results
Findings
Results from the model

