With the technological advancements of the modern era, the easy availability of image editing tools has dramatically reduced the cost and expertise needed to produce and spread persuasive visual tampering. Through widely used online platforms such as Facebook, Twitter, and Instagram, manipulated images are distributed worldwide, and users of these platforms may be unaware of their existence and spread. Such images have a significant impact on society and have the potential to mislead decision-making in areas such as health care, sports, and crime investigation. In addition, altered images can be used to propagate misleading information that interferes with democratic processes (e.g., elections and government legislation) and crisis situations (e.g., pandemics and natural disasters). There is therefore a pressing need for effective methods for the detection and identification of forgeries. Traditional techniques depend on handcrafted or shallow-learning features; selecting such features is challenging, because the researcher must decide which features are important and which are not, and when the number of features to be extracted is large, feature extraction becomes time-consuming and tedious. Deep learning networks have recently shown remarkable performance in extracting complex statistical characteristics from large inputs and efficiently learn the underlying hierarchical representations. However, deep learning networks for handling these forgeries are expensive in terms of the number of parameters, storage, and computational cost. This research work presents Mask R-CNN with a MobileNet backbone, a lightweight model, to detect and identify copy-move and image-splicing forgeries. We perform a comparative analysis of the proposed model against ResNet-101 on seven standard datasets. Our lightweight model outperforms it on the COVERAGE and MICC-F2000 datasets for copy-move forgery and on the Columbia dataset for image splicing. This work also provides a percentage score for the forged region of an image.
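
The abstract only names the architecture, so the following minimal sketch illustrates how a Mask R-CNN detector with a MobileNet backbone and a two-class head (background vs. forged region) might be assembled. It assumes the torchvision framework; the class count, input size, and anchor settings are illustrative assumptions, not the authors' configuration.

    import torch
    import torchvision
    from torchvision.models.detection import MaskRCNN
    from torchvision.models.detection.anchor_utils import AnchorGenerator

    # MobileNetV2 feature extractor as the lightweight backbone (assumed setup)
    backbone = torchvision.models.mobilenet_v2(weights="DEFAULT").features
    backbone.out_channels = 1280  # MaskRCNN needs the backbone's output channel count

    # Single-feature-map anchor generator for the region proposal network
    anchor_generator = AnchorGenerator(
        sizes=((32, 64, 128, 256, 512),),
        aspect_ratios=((0.5, 1.0, 2.0),),
    )

    # RoIAlign poolers for the box and mask heads
    box_roi_pool = torchvision.ops.MultiScaleRoIAlign(
        featmap_names=["0"], output_size=7, sampling_ratio=2
    )
    mask_roi_pool = torchvision.ops.MultiScaleRoIAlign(
        featmap_names=["0"], output_size=14, sampling_ratio=2
    )

    # Two classes assumed: background and forged region (copy-move or spliced area)
    model = MaskRCNN(
        backbone,
        num_classes=2,
        rpn_anchor_generator=anchor_generator,
        box_roi_pool=box_roi_pool,
        mask_roi_pool=mask_roi_pool,
    )

    model.eval()
    with torch.no_grad():
        # Dummy 512x512 RGB image; real inputs would be suspect images
        predictions = model([torch.rand(3, 512, 512)])
    print(predictions[0].keys())  # boxes, labels, scores, masks

In such a setup, a forged-region percentage like the one mentioned above could plausibly be derived from the area of the predicted mask relative to the image area, though the paper's exact scoring procedure is not shown here.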