Abstract
As artificial intelligence (AI) advances, the line separating authentic from AI-generated imagery grows increasingly blurred. This shift has serious consequences for sectors such as content verification and digital forensics, underscoring the need for reliable systems that identify AI-generated images. Our study evaluates transfer-learning-based models for this task, using three established architectures: AlexNet, a baseline Convolutional Neural Network (CNN), and VGG16. Transfer learning, which adapts models pre-trained on large datasets, has proven effective across many computer vision tasks. Here, we fine-tune the representations that AlexNet, the baseline CNN, and VGG16 learned from large-scale datasets to the specific task of detecting AI-generated content. The models are trained, validated, and tested on a comprehensive dataset containing both real and AI-generated images. Our experiments demonstrate that transfer learning is effective at distinguishing real from synthetic visuals. Through a comparative analysis, we highlight each model's strengths and limitations in terms of precision, recall, accuracy, and F1-score. We further examine the distinctive features each model relies on, to explain their contributions to accurate classification.