
Related Topics

  • Cartoon Images
  • Image Retrieval
  • Query Image

Articles published on Image content

5179 Search results
  • New
  • Research Article
  • 10.22630/mgv.2025.34.4.3
Adaptation art image style transfer by integrating CSDA-FD algorithm and OSDA-DS algorithm
  • Dec 4, 2025
  • Machine Graphics & Vision
  • Peng Wang

Traditional domain adaptation learning methods depend heavily on data labels, and the transfer process can easily degrade training-set performance, undermining the effectiveness of transfer learning. This study therefore proposes a domain adaptation model that combines feature disentangling with disentangled subspaces. The model separates the content and style features of images through disentangling, effectively improving the quality of image transfer. In the results, the proposed feature disentangling algorithm achieved pixel accuracy above 84% for semantic segmentation of 14 categories, including roads, sidewalks, and buildings, with an average pixel accuracy of 85.2%. On ImageNet, the precision, recall, F₁ score, and overall accuracy of the proposed algorithm were 0.942, 0.898, 0.854, and 0.841, respectively. Compared with a One-Class Support Vector Machine, precision, recall, F₁, and overall accuracy improved by 8.4%, 10.3%, 27.8%, and 10.9%, respectively. The proposed model can accurately recognize and classify images, providing effective technical support for image transfer.

  • New
  • Research Article
  • 10.1088/2631-8695/ae2833
Copy move forgery detection with Sobel filter using convolutional neural network
  • Dec 4, 2025
  • Engineering Research Express
  • Maheswary A/P Gnanasegaran + 2 more

This study aimed to improve copy-move forgery detection by combining traditional forensic techniques with deep learning methods. A hybrid detection framework is proposed that integrates Error Level Analysis (ELA), Haar wavelet decomposition, and Sobel edge detection with a VGG16 Convolutional Neural Network (CNN). The model was trained and evaluated using the CASIA v2.0 tampered image dataset from the Chinese Academy of Sciences. Results show that the ELA CNN Sobel model achieved a training accuracy of 99.96% and a validation accuracy of 91.21%, outperforming the Haar Wavelet Sobel model, which recorded 81.30% training accuracy and 65.90% validation accuracy. The use of Sobel filtering enhanced edge localization, allowing the CNN to detect manipulation boundaries more accurately. These findings demonstrate that combining ELA with Sobel filtering improves CNN performance and generalization, especially in compressed or low-quality images. This hybrid preprocessing approach offers practical value for image forensics, content authentication, and misinformation prevention. Future work should focus on optimizing preprocessing time and increasing dataset diversity to enhance real-time detection and robustness.
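The Sobel stage of such a pipeline is simple enough to illustrate directly. The sketch below is a minimal pure-Python rendering of gradient-magnitude extraction, not the paper's implementation (which feeds the result, together with ELA, into a VGG16 CNN); the 4×4 test image is invented for demonstration.

```python
# Sketch of Sobel pre-processing: gradient magnitudes highlight the
# boundary artifacts a copy-move forgery leaves behind. Illustrative
# only; a real pipeline would use a vectorized image library.

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img):
    """Approximate gradient magnitude |Gx| + |Gy| on a 2-D grayscale grid."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = abs(gx) + abs(gy)
    return out

# A vertical step edge: strong responses along the boundary, zero in
# the flat regions either side.
img = [[0, 0, 10, 10]] * 4
mag = sobel_magnitude(img)
```

Copy-move detectors exploit exactly this: a pasted region's border produces edge responses that are inconsistent with the surrounding content.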

  • New
  • Research Article
  • 10.1186/s12909-025-08367-1
Perspectives of physical therapists in Saudi Arabia on radiological interpretation: attitudes, engagement, and educational needs.
  • Dec 2, 2025
  • BMC medical education
  • Samia A Alamrani + 6 more

Radiological imaging is essential in clinical practice to support diagnosis and treatment planning. As Physical Therapists (PTs) increasingly collaborate within multidisciplinary teams, their ability to interpret radiographs has become more relevant. In Saudi Arabia, limited data exist concerning the involvement of PTs in radiological interpretation. Therefore, this study aimed to explore PTs' engagement with radiological information, assess their attitudes, and examine the factors influencing their involvement in and interest in imaging education. This cross-sectional descriptive study employed a self-structured questionnaire to gather data on demographics, professional characteristics, practice patterns, learning sources, perceived barriers, and attitudes. Chi-square tests were used to assess associations, and binary logistic regression was used to identify predictors of interest in further education. Among the 241 PTs surveyed, 46.1% reported frequent involvement in radiological interpretation, and 83.0% believed it should be part of their professional role. Academic education was the main learning source, while 40.0% identified insufficient training as a key barrier. Engagement levels and attitudes were significantly associated with qualification, experience, workplace setting, and specialization. Notably, PTs who rarely contributed were four times more likely to express interest in further education (OR = 4.0, 95% CI: 1.5-10.4, p = 0.007). Many PTs in Saudi Arabia reported engaging in radiological interpretation, though the extent and accuracy of these contributions remain self-reported rather than objectively confirmed. Their involvement was influenced by education, clinical experience, and workplace setting. The findings highlight the need to integrate imaging content into national curricula and continuing professional development programs. Enhancing these competencies has the potential to strengthen collaborative care and may contribute to improved clinical decision-making and healthcare outcomes. Not applicable.

  • New
  • Research Article
  • 10.1016/j.cmpb.2025.109042
PM2: A new prompting multi-modal model paradigm for few-shot medical image classification.
  • Dec 1, 2025
  • Computer methods and programs in biomedicine
  • Zhenwei Wang + 5 more


  • Research Article
  • 10.3390/make7040140
Towards Explainable Machine Learning from Remote Sensing to Medical Images—Merging Medical and Environmental Data into Public Health Knowledge Maps
  • Nov 6, 2025
  • Machine Learning and Knowledge Extraction
  • Liviu Bilteanu + 6 more

Both the remote sensing and medical fields have benefited greatly from machine learning methods originally developed for computer vision and multimedia. We investigate the applicability of the same data-mining-based machine learning (ML) techniques for exploring the structure of both Earth observation (EO) and medical image data. We use the Support Vector Machine (SVM) as an explainable active learning tool to discover the semantic relations between EO image content classes, extending this technique further to medical images of various types. The EO image dataset was acquired by multispectral and radar sensors (WorldView-2, Sentinel-2, TerraSAR-X, Sentinel-1, RADARSAT-2, and Gaofen-3) from four different urban areas. In addition, medical images were acquired by camera, microscope, and computed tomography (CT). The methodology has been tested by several experts, and the semantic classification results were checked either by comparing them with reference data or through feedback given by these experts in the field. The accuracy of the results amounts to 95% for the satellite images and 85% for the medical images. This study opens the pathway to correlating the information extracted from EO images (e.g., quality-of-life-related environmental data) with that extracted from medical images (e.g., medical imaging disease phenotypes) to obtain geographically refined results in epidemiology.

  • Research Article
  • 10.3390/sym17111864
GAN Ownership Verification via Model Watermarking: Protecting Image Generators from Surrogate Model Attacks
  • Nov 4, 2025
  • Symmetry
  • Shuai Cao + 1 more

With the widespread application of generative adversarial networks (GANs) in image generation and content creation, their model architectures and training outcomes have become valuable intellectual property assets. However, in practical deployment, image generative models are vulnerable to surrogate model attacks, posing significant risks to copyright ownership and commercial interests. To address this issue, this paper proposes a novel copyright protection scheme for image generative models with a symmetric embedding–retrieval watermark architecture in GANs focused on defending against surrogate model attacks. Unlike traditional model encryption or architectural constraint strategies, the proposed approach integrates a watermark embedding module directly into the image generative network, enabling generated images to implicitly carry copyright identifiers. Leveraging a symmetric design between the embedding and retrieval processes, the system ensures that, under surrogate model attacks, the original model’s identity can be reliably verified by extracting the embedded watermark from the generated outputs. The implementation comprises three key modules—feature extraction, watermark embedding, and watermark retrieval—forming an end-to-end, balanced embedding–retrieval pipeline. Experimental results demonstrate that this approach achieves efficient and stable watermark embedding and retrieval without compromising generation quality, exhibiting high robustness, traceability, and practical applicability, thereby offering a viable and symmetric solution for intellectual property protection in image generative networks.

  • Research Article
  • 10.1109/tmi.2025.3580561
Bridging the Semantic Gap in Medical Visual Question Answering With Prompt Learning.
  • Nov 1, 2025
  • IEEE transactions on medical imaging
  • Zilin Lu + 4 more

Medical Visual Question Answering (Med-VQA) aims to answer questions regarding the content of medical images, crucial for enhancing diagnostics and education in healthcare. However, progress in this field is hindered by data scarcity due to the resource-intensive nature of medical data annotation. While existing Med-VQA approaches often rely on pre-training to mitigate this issue, bridging the semantic gap between pre-trained models and specific tasks remains a significant challenge. This paper presents the Dynamic Semantic-Adaptive Prompting (DSAP) framework, leveraging prompt learning to enhance model performance in Med-VQA. To this end, we introduce two prompting strategies: Semantic Alignment Prompting (SAP) and Dynamic Question-Aware Prompting (DQAP). SAP prompts multi-modal inputs during fine-tuning, reducing the semantic gap by aligning model outputs with domain-specific contexts. Simultaneously, DQAP enhances answer selection by leveraging grammatical relationships between questions and answers, thereby improving accuracy and relevance. The DSAP framework was pre-trained on three datasets-ROCO, MedICaT, and MIMIC-CXR-and comprehensively evaluated against 15 existing Med-VQA models on three public datasets: VQA-RAD, SLAKE, and PathVQA. Our results demonstrate a substantial performance improvement, with DSAP achieving a 1.9% enhancement in average results across benchmarks. These findings underscore DSAP's effectiveness in addressing critical challenges in Med-VQA and suggest promising avenues for future developments in medical AI.

  • Research Article
  • 10.1111/1556-4029.70161
Unmasking anti-forensic techniques: A DCNN-driven approach to uncover contrast enhancement and median filtering detection.
  • Nov 1, 2025
  • Journal of forensic sciences
  • Neeti Taneja + 2 more

A forensic analyst must draw on a variety of artifacts to build a potent forensic method; anti-forensic approaches seek to elude forensic detectors by eliminating these artifacts. The field of digital image forensics faces many difficulties due to the growing sophistication of anti-forensic tactics. Two popular techniques for modifying image characteristics are contrast enhancement and median filtering, which are frequently used to hide signs of manipulation. Therefore, a solution for identifying anti-forensic techniques is urgently needed. This paper presents a multi-class forensic Deep Convolutional Neural Network (DCNN) architecture that combines domain-specific feature streams and residual-domain pre-processing. This pre-processing is designed to suppress image content and highlight manipulation artifacts in order to detect and classify various kinds of image alterations. The DCNN is designed to recognize and extract minute manipulation artifacts that are hidden in pixel-level patterns and invisible to the naked eye. The Boss Base dataset is used for training and testing. Experimental assessments show that the proposed model can recognize images that have been exposed to median filtering and contrast enhancement anti-forensics with an accuracy of 96.42%, even at different levels of manipulation intensity. The proposed model integrates intelligent pre-processing with domain-tailored streams, which makes it robust against compression and capable of distinguishing between a wide range of complex manipulation types. This strategy fulfills the increasing demand for automated and precise detection techniques in the fight against anti-forensic activities by offering a reliable tool to digital forensic investigators.
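The residual-domain idea the abstract describes — suppress the image content so only high-frequency artifacts remain — can be sketched as a median-filter residual. This is an illustrative guess at the general technique, not the paper's exact pre-processing, and the 3×3 test image is invented:

```python
# Sketch of residual-domain pre-processing: subtracting a median-filtered
# version of the image cancels smooth content and leaves the residue in
# which filtering/enhancement artifacts live. Illustrative only.

def median3x3(img):
    """3x3 median filter (border pixels copied unchanged)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(img[y + j][x + i]
                            for j in (-1, 0, 1) for i in (-1, 0, 1))
            out[y][x] = window[4]          # middle of 9 sorted values
    return out

def residual(img):
    """Median-filter residual, the kind of signal a residual stream sees."""
    med = median3x3(img)
    return [[img[y][x] - med[y][x] for x in range(len(img[0]))]
            for y in range(len(img))]

# A single bright outlier survives in the residual; the smooth
# background cancels out.
img = [[5, 5, 5], [5, 90, 5], [5, 5, 5]]
res = residual(img)
```

A classifier trained on such residuals sees manipulation traces directly rather than the scene content that would otherwise dominate.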

  • Research Article
  • 10.55041/ijsrem53367
Image Auto-Compression using Sharp and AWS Lambda
  • Oct 31, 2025
  • INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT
  • Ms Farhina S Sayyad + 1 more

In today’s digital era, users frequently upload high-resolution images, which often lead to system performance issues, slower load times, and excessive cloud storage usage. Manual image optimization remains inefficient and prone to human error for both developers and end-users. This paper introduces an automated, serverless image optimization pipeline utilizing AWS Lambda in combination with the Sharp.js library. When an image is uploaded to Amazon S3, it activates a Lambda function that automatically compresses and optimizes the image into a web-friendly format without noticeable quality degradation. This approach enables real-time image compression without the need for backend server management, thereby minimizing storage requirements, improving application speed, and enhancing user experiences across various platforms. In the modern internet-driven landscape, images represent a significant portion of the data transmitted across both web and mobile applications. Studies indicate that over 65% of webpage data weight is attributed to images, underlining the necessity of efficient image management. While high-resolution visuals are crucial for superior user engagement, they increase bandwidth consumption, load time, and cloud storage expenses. Traditional optimization approaches demand manual pre-processing or rely on specialized backend servers, which introduces inefficiency, cost, and maintenance challenges. This study proposes a completely automated, serverless pipeline for image compression and optimization using AWS Lambda and Sharp.js. Leveraging AWS Lambda’s event-driven framework, the system triggers compression operations whenever new images are uploaded to S3. Sharp.js, built upon the efficient libvips engine, performs resizing and compression operations while maintaining visual quality. The integration of serverless computing with this high-performance library ensures real-time automation, scalability, and cost efficiency. Furthermore, this research introduces two innovative enhancements: 1. A Deep Reinforcement Learning (DRL)-based predictive resource provisioning mechanism that mitigates cold start latency. 2. A Semantic-Aware Adaptive Compression (S-ADC) algorithm that intelligently modifies compression settings based on image content and semantic complexity. Experimental evaluations conducted across formats such as JPEG, PNG, WebP, and AVIF reveal considerable reductions in file size while preserving visual fidelity. The proposed system not only enhances accessibility for users with limited bandwidth but also reduces cloud expenses and supports sustainable computing practices. By merging serverless infrastructure with adaptive intelligence, this work delivers a scalable, cost-effective, and eco-friendly solution for image optimization applicable to real-world web and mobile platforms. Keywords—Cloud Computing, Serverless Architecture, AWS Lambda, Sharp.js, Image Compression, Reinforcement Learning, Adaptive Compression, Media Optimization, Cloud Efficiency.
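The adaptive-compression idea behind S-ADC can be sketched in a few lines: derive a content-complexity score and map it to a quality setting. The complexity proxy (mean neighbour difference) and the thresholds below are hypothetical illustrations, not the paper's algorithm, and the example is in Python rather than the paper's Sharp.js/Node stack:

```python
# Hedged sketch of semantic-aware adaptive compression: pick a JPEG-style
# quality level from a crude content-complexity proxy. The measure and
# thresholds are invented for illustration, not taken from the paper.

def complexity(img):
    """Mean absolute difference between horizontal neighbours: a crude
    proxy for visual detail (0 for flat images, large for busy ones)."""
    diffs = [abs(row[x + 1] - row[x])
             for row in img for x in range(len(row) - 1)]
    return sum(diffs) / len(diffs)

def pick_quality(img, lo=50, hi=90):
    """Map complexity onto a quality setting: flat images compress hard,
    detailed images keep more quality (thresholds are illustrative)."""
    c = complexity(img)
    if c < 1.0:
        return lo                  # flat content: aggressive compression
    if c < 10.0:
        return (lo + hi) // 2      # moderate detail
    return hi                      # busy content: conservative compression

flat = [[128] * 8 for _ in range(8)]
busy = [[(x * 37) % 256 for x in range(8)] for _ in range(8)]
```

In the serverless pipeline, a function like `pick_quality` would run inside the S3-triggered handler before the actual encode step.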

  • Research Article
  • 10.54097/xhtpna28
Key Technology Breakthrough and Application Expansion of Image Encryption Authentication Based on SA-QE Collaboration
  • Oct 31, 2025
  • Journal of Computer Science and Artificial Intelligence
  • Qing Gan + 4 more

With the rapid development of digital multimedia technology, images, as an important carrier of information dissemination, have been widely applied in fields such as healthcare, security, commerce, and social networking. However, images are highly susceptible to tampering, duplication, and illegal use during transmission and storage, posing severe challenges to their authenticity and integrity. Traditional image authentication techniques exhibit significant deficiencies in terms of security, robustness, and invisibility, making them difficult to meet the increasing security demands. This paper proposes a novel image authentication method that integrates Sparse Approximation (SA) and Quantum Encryption (QE), aiming to enhance the security and anti-attack capabilities of digital images. The method first performs subsampling and sparsification on the watermark image, extracts multi-scale features of the image using Discrete Wavelet Transform (DWT), and generates a highly random measurement matrix through quantum logic mapping to achieve encryption and exchange of sparse coefficients. Subsequently, Singular Value Decomposition (SVD) is employed to embed the encrypted watermark information into the low-frequency components of the host image, ensuring the invisibility and robustness of the watermark. Experimental results demonstrate that the proposed method exhibits excellent performance in resisting noise, geometric transformations, and enhancement attacks. When the correct key is used, the watermark can be accurately recovered, while the use of an incorrect key results in complete distortion of the watermark, effectively preventing illegal extraction. The research presented in this paper provides an efficient and secure technical path for digital image copyright protection and content authentication.
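The key-dependent scrambling step this scheme relies on can be illustrated with a chaotic map. The paper uses a quantum logistic map to build its measurement matrix; the sketch below substitutes a classical logistic map generating a key-dependent permutation of coefficients, purely to show the encrypt/decrypt symmetry and key sensitivity:

```python
# Hedged sketch of chaotic-map coefficient scrambling (classical logistic
# map standing in for the paper's quantum logistic mapping).

def logistic_permutation(n, x0, r=3.99):
    """Permutation of range(n) ordered by a logistic-map trajectory
    seeded with key x0 (0 < x0 < 1)."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(x)
    return sorted(range(n), key=lambda i: xs[i])

def scramble(coeffs, key):
    perm = logistic_permutation(len(coeffs), key)
    return [coeffs[p] for p in perm]

def unscramble(scrambled, key):
    perm = logistic_permutation(len(scrambled), key)
    out = [0] * len(scrambled)
    for i, p in enumerate(perm):
        out[p] = scrambled[i]      # invert the permutation
    return out

coeffs = [3, 1, 4, 1, 5, 9, 2, 6]
enc = scramble(coeffs, key=0.3141)
# The correct key recovers the coefficients; a wrong key yields garbage,
# mirroring the watermark-distortion behaviour reported in the abstract.
```

The extreme sensitivity of the logistic map to its seed is what makes a near-miss key useless, which is the property the abstract's wrong-key experiment demonstrates.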

  • Research Article
  • 10.1109/tip.2025.3625764
Instruction-Driven Multi-Weather Image Translation Based on a Large-Scale Image Editing Model.
  • Oct 31, 2025
  • IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
  • Yunjian Feng + 2 more

Weather image translation technologies aim to convert sunny images into various weather scenes, addressing the challenge of the costly acquisition of highly-demanded diverse weather samples. However, existing weather translation methods based on generative adversarial networks (GANs) have limited generalization capability, resulting in translated images that lack authenticity and diversity. In contrast, the emerging image generation technologies based on diffusion models have greatly surpassed GAN-based ones in performance, thus becoming the dominant paradigm in various visual tasks. This work pioneers the application of diffusion models to weather translation and presents a novel Instruction-driven Multi-Weather Translation (InstructWT) method. InstructWT is built on the large image editing model, InstructPix2Pix, and leverages the latter's zero-shot generalization capacities. We develop a user-friendly translation instruction set through prompt engineering and introduce a weather intensity factor for precise control of weather effects, thereby enhancing the authenticity and diversity of the translated weather images. A weather correlation-based blended editing technique is employed to maintain the layout and structure of the original image content. Additionally, a physical rendering approach for rain and snow is incorporated to further improve the translations' realism. The results of comparative experiments on a public dataset, Cityscapes, demonstrate that InstructWT outperforms existing methods in terms of authenticity and fidelity. Specifically, InstructWT achieves Contrastive Language-Image Pre-Training (CLIP) image embedding cosine similarity and directional CLIP similarity scores of 0.8302 and 0.1598, respectively. Furthermore, several semantic segmentation algorithms fine-tuned using the multi-weather scene dataset augmented by InstructWT show significant improvement in segmentation quality on all complex weather scenarios.

  • Research Article
  • 10.1145/3773281
Enhancing Embedding Diversity and Robustness for Image-Text Retrieval in Remote Sensing
  • Oct 28, 2025
  • ACM Transactions on Multimedia Computing, Communications, and Applications
  • Yuchen Sha + 7 more

Remote Sensing Image-Text Retrieval (RSITR) aims to retrieve semantically matched images or textual descriptions. RSITR faces the significant challenge of achieving accurate cross-modal alignment. Existing methods commonly assume that all image-text pairs are correctly aligned, where a single textual description corresponds to one specific image. However, in real-world RSITR scenarios, image-text pairs inevitably involve incorrectly correlated relationships, in which texts contain factual errors in describing image content. Moreover, these datasets also exhibit weakly correlated relationships, where a single image is correlated with multiple semantically similar texts, making cross-modal alignment difficult. In this paper, we propose the Enhancing Embedding Diversity and Robustness (EEDR) framework, which enhances cross-modal alignment by learning both instance-level and feature-level diversity to improve visual-textual correspondence robustness. Firstly, to alleviate the impact of incorrectly correlated relationships, we propose an Incorrectly-Correlated Feature Rectification (ICFR) module that incorporates large language model (LLM) guidance to refine instance-level visual-textual correspondences. This module introduces a dynamic margin-guided mechanism that quantifies feature discrepancies to adaptively weight the original and LLM-generated auxiliary descriptions, thereby suppressing the interference of incorrect textual descriptions. Secondly, a Weakly-Correlated Feature Decoupling (WCFD) module is proposed to learn diverse discriminative features across modalities. The WCFD module learns parameterized normal distributions to generate modality-specific decoupled features, enabling the model to accurately establish visual-textual correspondences under the condition of multiple semantically similar texts. We conduct extensive experiments on benchmark datasets, demonstrating that our approach outperforms state-of-the-art methods. 
Our code is available at https://github.com/ycharlene/EEDR.
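The dynamic weighting idea inside ICFR — lean more on the auxiliary description when the image and its paired text disagree — can be sketched with toy embeddings. The discrepancy measure, sigmoid squashing, and margin value below are illustrative stand-ins, not the paper's formulation:

```python
# Hedged sketch of discrepancy-driven blending of original and auxiliary
# (LLM-generated) text features. All quantities here are toy values.
import math

def l2_distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def blend_text_features(img, text, aux_text, scale=1.0):
    """Blend original and auxiliary text features; the auxiliary weight
    grows with image-text discrepancy (illustrative margin of 1.0)."""
    d = l2_distance(img, text)
    w_aux = 1.0 / (1.0 + math.exp(-scale * (d - 1.0)))
    blended = [(1 - w_aux) * t + w_aux * a for t, a in zip(text, aux_text)]
    return blended, w_aux

img = [1.0, 0.0]
good_text = [1.0, 0.0]    # matches the image: keep it mostly as-is
bad_text = [-1.0, 0.0]    # contradicts the image: lean on the auxiliary
aux = [0.9, 0.1]

_, w_good = blend_text_features(img, good_text, aux)
_, w_bad = blend_text_features(img, bad_text, aux)
```

The point of the mechanism is visible in the two weights: a well-matched caption keeps a low auxiliary weight, while a factually wrong one is largely replaced.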

  • Research Article
  • 10.3390/s25216559
Improved Quadtree-Based Selection of Single Images for 3D Generation
  • Oct 24, 2025
  • Sensors
  • Wanyun Li + 5 more

With the rapid development of large generative models for 3D content, image-to-3D and text-to-3D generation have become a major focus in computer vision and graphics. Single-view 3D reconstruction, in particular, offers a convenient and practical solution. However, automatically choosing the best image from a large collection is crucial for optimizing reconstruction quality and efficiency. This paper proposes a novel image selection framework based on a multi-feature fusion quadtree structure. Our approach integrates various visual and semantic features and uses a hierarchical quadtree to efficiently evaluate image content, allowing us to identify the most informative and reconstruction-friendly image in large datasets. We then use Tencent's Hunyuan 3D model to verify that the selected image improves reconstruction performance. Experimental results show that our method outperforms existing approaches across key metrics. Baseline methods achieved average scores of 6.357 in Accuracy, 6.967 in Completeness, and 6.662 in Overall; our method reduced these to 4.238, 5.166, and 4.702, corresponding to an average error reduction of 29.5%. These results confirm that our approach reduces reconstruction errors, improves geometric consistency, and yields more visually plausible 3D models.
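A quadtree evaluation of image content can be sketched with a recursive split-on-variation rule. The split criterion (intensity range) and the leaf count as a richness score are illustrative simplifications of the paper's multi-feature fusion, and the test images are invented:

```python
# Hedged sketch of quadtree content evaluation: recursively split a tile
# while its intensity range is large; the leaf count serves as a crude
# detail score for ranking candidate images. Illustrative only.

def quadtree_leaves(img, x, y, size, thresh=10):
    """Count quadtree leaves over the square tile at (x, y)."""
    vals = [img[y + j][x + i] for j in range(size) for i in range(size)]
    if size == 1 or max(vals) - min(vals) <= thresh:
        return 1                       # homogeneous tile: stop splitting
    half = size // 2
    return sum(quadtree_leaves(img, x + dx, y + dy, half, thresh)
               for dx in (0, half) for dy in (0, half))

flat = [[7] * 4 for _ in range(4)]
detailed = [[(x + y) * 20 for x in range(4)] for y in range(4)]
# More leaves -> more local detail -> a better reconstruction candidate
# under this (illustrative) criterion.
```

Ranking images by such a score is cheap relative to running the 3D generator, which is the efficiency argument behind hierarchical selection.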

  • Research Article
  • 10.3390/electronics14204116
Enhancing JPEG XL’s Weighted Average Predictor: Genetic Algorithm Optimization of Expanded Sub-Predictor Ensemble
  • Oct 21, 2025
  • Electronics
  • Xavier Hill Roy + 1 more

Lossless image compression relies heavily on prediction algorithms to reduce spatial redundancy before entropy coding. The JPEG XL standard employs a weighted average predictor that combines four sub-predictors with adaptive weighting; however, it uses fixed initial scaling factors regardless of the image content. This study introduces WOP8 (weighted optimization predictor for 8 sub-predictors), which extends the predictor diversity and optimizes initial weights using a genetic algorithm. Four additional predictors were incorporated—adaptive MED (JPEG-LS), enhanced adaptive median, Paeth (PNG), and GAP-based (CALIC)—forming an eight-predictor ensemble. A genetic algorithm with a population of 30 and 24 generations optimized the weight configurations by minimizing the compressed file size of the training data. Experiments were conducted on the Kodak and Tecnick datasets to evaluate performance and generalizability. The Kodak color dataset showed notable gains: with the weighted average predictor in isolation, WOP8 achieved a 0.24 BPP reduction (2.7% improvement) at high effort levels. Under standard JPEG XL operation mode, improvements were minor but consistent. These results confirm the value of targeted predictor optimization and demonstrate that genetic algorithms can effectively discover dataset-specific weighting patterns, offering a foundation for future component-level enhancements in JPEG XL.
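One of the sub-predictors WOP8 adds to the ensemble, the Paeth predictor from PNG, is simple enough to state exactly: pick whichever neighbour (left, above, upper-left) is closest to the linear estimate `left + above - upper_left`.

```python
# The Paeth predictor (PNG filter type 4), one of the four sub-predictors
# WOP8 adds to JPEG XL's weighted average ensemble.

def paeth(left, above, upper_left):
    p = left + above - upper_left      # initial gradient estimate
    pa = abs(p - left)
    pb = abs(p - above)
    pc = abs(p - upper_left)
    if pa <= pb and pa <= pc:
        return left
    if pb <= pc:
        return above
    return upper_left

# On a smooth horizontal gradient the left neighbour wins; across a
# vertical edge the predictor switches to the neighbour above.
```

In the ensemble, each sub-predictor's output is combined via per-context adaptive weights; WOP8's contribution is letting a genetic algorithm choose the initial weights instead of fixed scaling factors.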

  • Research Article
  • 10.3390/math13203329
Hybrid Convolutional Transformer with Dynamic Prompting for Adaptive Image Restoration
  • Oct 19, 2025
  • Mathematics
  • Jinmei Zhang + 5 more

High-quality image restoration (IR) is a fundamental task in computer vision, aiming to recover a clear image from its degraded version. Prevailing methods typically employ a static inference pipeline, neglecting the spatial variability of image content and degradation, which makes it difficult for them to adaptively handle complex and diverse restoration scenarios. To address this issue, we propose a novel adaptive image restoration framework named Hybrid Convolutional Transformer with Dynamic Prompting (HCTDP). Our approach introduces two key architectural innovations: a Spatially Aware Dynamic Prompt Head Attention (SADPHA) module, which performs fine-grained local restoration by generating spatially variant prompts through real-time analysis of image content and a Gated Skip-Connection (GSC) module that refines multi-scale feature flow using efficient channel attention. To guide the network in generating more visually plausible results, the framework is optimized with a hybrid objective function that combines a pixel-wise L1 loss and a feature-level perceptual loss. Extensive experiments on multiple public benchmarks, including image deraining, dehazing, and denoising, demonstrate that our proposed HCTDP exhibits superior performance in both quantitative and qualitative evaluations, validating the effectiveness of the adaptive restoration framework while utilizing fewer parameters than key competitors.
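The hybrid objective described above — pixel-wise L1 plus a feature-level perceptual term — can be sketched on toy 1-D signals. The "feature extractor" below (local differences) is a hypothetical stand-in for the pretrained network a real perceptual loss would use:

```python
# Hedged sketch of a hybrid L1 + perceptual objective. The toy feature
# extractor stands in for deep features and is purely illustrative.

def l1(a, b):
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def features(signal):
    """Toy stand-in for deep features: local horizontal differences,
    so flattened edges are penalized even when pixel error is small."""
    return [signal[i + 1] - signal[i] for i in range(len(signal) - 1)]

def hybrid_loss(pred, target, w_perc=0.1):
    return l1(pred, target) + w_perc * l1(features(pred), features(target))

target = [0, 10, 20, 30]
blurry = [5, 10, 20, 25]   # close on average, but with flattened edges
```

The perceptual term is what pushes a restorer away from over-smoothed outputs that a pure L1 loss would tolerate.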

  • Research Article
  • 10.1145/3744656
Back-in-Time Diffusion: Unsupervised Detection of Medical Deepfakes
  • Oct 17, 2025
  • ACM Transactions on Intelligent Systems and Technology
  • Fred M Grabovski + 3 more

Recent progress in generative models has made it easier for a wide audience to edit and create image content, raising concerns about the proliferation of deepfakes, especially in healthcare. Despite the availability of numerous techniques for detecting manipulated images captured by conventional cameras, their applicability to medical images is limited. This limitation stems from the distinctive forensic characteristics of medical images, a result of their imaging process. In this work, we propose a novel anomaly detector for medical imagery based on diffusion models. Normally, diffusion models are used to generate images. However, we show how a similar process can be used to detect synthetic content by making a model reverse the diffusion on a suspected image. We evaluate our method on the task of detecting fake tumors injected and removed from CT and MRI scans. Our method significantly outperforms other state-of-the-art unsupervised detectors with an increased AUC of 0.9 from 0.79 for injection and of 0.96 from 0.91 for removal on average. We also explore our hypothesis using AI explainability tools and publish both our code and new medical deepfake datasets to encourage further research into this domain.
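The detection principle — reverse a generative process on a suspect image and score it by how well it reconstructs — can be sketched with a stub model. The `denoise` function below is hypothetical and stands in for the diffusion model's partial reverse process; only the scoring logic is the point:

```python
# Hedged sketch of reconstruction-error anomaly scoring. The 'model'
# here is a hypothetical stub (neighbour averaging on 1-D signals),
# standing in for a diffusion model's reverse process.

def denoise(img):
    """Hypothetical model stub: expects smooth content and returns each
    sample averaged with its horizontal neighbours."""
    n = len(img)
    return [(img[max(i - 1, 0)] + img[i] + img[min(i + 1, n - 1)]) / 3
            for i in range(n)]

def anomaly_score(img):
    """Mean absolute reconstruction error; higher = more suspicious."""
    rec = denoise(img)
    return sum(abs(a - b) for a, b in zip(img, rec)) / len(img)

smooth = [10, 10, 10, 10]
tampered = [10, 10, 80, 10]   # injected spike the 'model' cannot explain
```

Content the model has learned to generate reconstructs with low error; injected synthetic content does not, which is what separates tampered from clean scans in the paper's evaluation.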

  • Research Article
  • 10.1080/17512786.2025.2572976
Through the Graphic Lens: The Effects of Graphic Images on News Consumers’ Reactions
  • Oct 15, 2025
  • Journalism Practice
  • Gabriela Ruhl Ibarra + 3 more

Journalism professionals and media scholars frequently debate whether audiences should be exposed to graphic or sanitized imagery in news coverage. Understanding how such visuals affect viewers is key to resolving this issue. In this paper, we investigate the impact of graphic imagery on news consumers’ reactions, and examine the moderating effects of prior exposure to violence, and the proximity of the event. In Study 1, Mexican and Dutch participants (N = 128) viewed either a graphic or sanitized news video about the 2019–2020 Chilean protests. In Study 2, Mexican participants (N = 375) read a news article about violence against migrants, varying in image content and event proximity. Results indicate that graphic imagery increased negative emotions and willingness to discuss and share the news but did not influence information seeking or social involvement. Prior exposure to violence and proximity did not moderate these effects. However, specific negative emotions mediated the relationship between graphicness and some outcomes. These findings provide journalists with an evidence-based foundation for making informed decisions about the use of graphic visuals. By understanding the impact of such imagery, they can more effectively evaluate which visuals to include in their reporting without overwhelming or alienating audiences.

  • Research Article
  • 10.1038/s41598-025-19733-w
An innovative multi-head attention mechanism-driven recurrent neural network model with feature representation fusion for enhanced image captioning to assist individuals with visual impairments
  • Oct 14, 2025
  • Scientific Reports
  • Mashael M Asiri + 3 more

Developments in image captioning technology have played a crucial role in improving the quality of life of individuals with visual impairments and advancing social inclusivity. Image captioning is the task of describing the visual content of a static image in natural language, combining a visual understanding system with a language model able to generate meaningful and syntactically correct sentences. Automatically describing image content remains a significant challenge in artificial intelligence (AI). The emergence of deep learning (DL) and, most recently, vision-language pre-training has significantly advanced the field, yielding more sophisticated techniques and better performance, as DL-based methods can handle the difficulties and nuances of captioning. This paper proposes an Innovative Multi-Head Attention Mechanism-Driven Recurrent Neural Network with Feature Representation Fusion for Image Captioning Performance (MARNN-FRFICP) approach to assist individuals with visual impairments, aiming to improve accessibility through more effective caption generation. Initially, Gaussian filtering (GF) is applied in the pre-processing stage to improve image quality by removing noise. Next, a fusion of advanced DL models, namely InceptionResNetV2, the convolutional vision transformer (CvT), and DenseNet169, strengthens the feature extraction process. A hybrid multi-head attention mechanism-based bi-directional long short-term memory and gated recurrent unit (MH-BLG) technique is then used for caption generation, and the Lyrebird optimization algorithm (LOA) is employed for tuning.
The efficiency of the MARNN-FRFICP methodology is evaluated on the Flickr8k, Flickr30k, and MSCOCO datasets. The experimental analysis demonstrates that the MARNN-FRFICP methodology achieves improved scalability and performance compared to recent techniques across various measures.
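As a rough illustration of the described pipeline (pre-processing, multi-backbone feature fusion, then attention-based decoding), the sketch below chains trivial pure-Python stand-ins. None of these functions are the authors' networks; the "backbones" are hypothetical one-number summaries and the "decoder" is a placeholder rule:

```python
def gaussian_filter(pixels, kernel=(0.25, 0.5, 0.25)):
    """Stage 1 stand-in: 1-D Gaussian smoothing to suppress noise."""
    padded = [pixels[0]] + list(pixels) + [pixels[-1]]
    return [sum(w * padded[i + j] for j, w in enumerate(kernel))
            for i in range(len(pixels))]

def extract_features(pixels):
    """Stage 2 stand-in: three 'backbones', fused by concatenation."""
    inception_like = [sum(pixels) / len(pixels)]                      # global mean
    cvt_like = [max(pixels) - min(pixels)]                            # contrast
    densenet_like = [sum(abs(a - b) for a, b in zip(pixels, pixels[1:]))]
    return inception_like + cvt_like + densenet_like                  # fused vector

def generate_caption(features):
    """Stage 3 stand-in for the MH-BLG decoder: a placeholder rule."""
    return "a bright scene" if features[0] > 0.5 else "a dark scene"

def pipeline(pixels):
    return generate_caption(extract_features(gaussian_filter(pixels)))
```

The point of the sketch is the data flow: each stage consumes the previous stage's output, and fusion simply widens the feature vector handed to the decoder.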

  • Research Article
  • 10.58451/ijebss.v3i7.280
The Influence of Experience Satisfaction on Revisit Intention with Site Image and Content Moderation
  • Oct 11, 2025
  • International Journal of Engineering Business and Social Science
  • Fakia Zikri Abdul Karim + 1 more

The high level of competition in the pizzeria industry in Indonesia, particularly in Bekasi Regency, reflects a situation where customer experience satisfaction is not always directly proportional to the intention to revisit. This study analyzes the influence of experience satisfaction on revisit intention, with site image and content as moderators, among consumers of the fast-food restaurants Pizza Hut, Domino’s Pizza, and Gian Pizza in Bekasi Regency. A quantitative approach was employed, utilizing primary data sources. The research sample consisted of 360 participants selected through purposive sampling, a non-probability sampling technique. Data collected via questionnaires were analyzed using Partial Least Squares Structural Equation Modeling (PLS-SEM) with SmartPLS 3.2.9 software. The results reveal that: (1) experience satisfaction does not have a significant effect on the revisit intention of consumers of Pizza Hut, Domino’s Pizza, and Gian Pizza in Bekasi Regency; (2) site image does not moderate the relationship between experience satisfaction and revisit intention of pizza restaurant consumers in Bekasi Regency; and (3) content also does not moderate that relationship. These findings underscore the importance of restaurant image and content-based marketing strategies in fostering customer loyalty. The study implies that enhancing customer satisfaction and strategically utilizing site image and promotional content are essential to increasing revisit intentions.
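Moderation in a design like this is tested as an interaction effect: revisit intention is regressed on satisfaction, the moderator, and their product, and the product term's coefficient carries the moderation. As an illustrative analog only (plain OLS on made-up variables, not the study's SmartPLS analysis), the interaction model can be sketched as:

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def moderation_fit(sat, mod, revisit):
    """OLS for: revisit = b0 + b1*sat + b2*mod + b3*(sat*mod).
    b3 is the moderation (interaction) effect; b3 near zero means the
    moderator does not change the satisfaction -> revisit slope."""
    X = [[1.0, s, m, s * m] for s, m in zip(sat, mod)]
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(4)] for i in range(4)]
    Xty = [sum(r[i] * y for r, y in zip(X, revisit)) for i in range(4)]
    return solve(XtX, Xty)
```

A null moderation result, as reported in the study, corresponds to an interaction coefficient that is statistically indistinguishable from zero.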

  • Research Article
  • 10.3389/fpls.2025.1681466
Mechanism of nanomaterial-induced lipid droplet formation in Raphidocelis subcapitata is mediated by charge properties
  • Oct 9, 2025
  • Frontiers in Plant Science
  • Emma Mckeel + 5 more

Increasing the production of renewable energy will be critical to achieving global sustainability goals in the coming decades, and biofuels derived from microalgae have great potential to contribute to this production. However, cultivating algae with sufficient neutral lipid content while maintaining high growth rates remains a continual challenge in making algal-derived biofuels a reality. Previous work has shown that exposure to polymer-functionalized carbon dots can increase the lipid content of the microalga Raphidocelis subcapitata. This study builds on that finding, aiming to determine the mechanisms underlying the effect and whether altering nanoparticle surface charge mediates the carbon dots' mechanism of action. Carbon dots with negative and with positive surface charges were added to microalgal cultures, and the impacts of exposure were analyzed using high-content imaging, growth measurements, and chlorophyll content measurements. Results indicate that positively charged carbon dots induce a nano-specific increase in lipid content but also decrease growth. Additionally, the mechanism of action of each nanoparticle was examined through a morphological comparison to treatments with known mechanisms of action. This analysis showed that negatively charged carbon dots affect R. subcapitata similarly to nitrogen deprivation, which is known to increase lipid content in microalgae. The findings suggest that carbon dots may have surface-charge-dependent effects on the lipid metabolism of R. subcapitata. Future work should consider carbon dots with varied surface charge densities for enhancing algal biofuel production in bioreactors.

Copyright 2025 Cactus Communications. All rights reserved.