Deep Learning-based Model Research Articles

Overview: 3,611 articles published in the last 50 years.

Related Topics

  • Deep Learning Models
  • Learning-based Model
  • Deep Model

Articles published on Deep Learning-based Model

3,545 search results, sorted by recency.
Leveraging protein language models for cross-variant CRISPR/Cas9 sgRNA activity prediction.

Accurate prediction of single-guide RNA (sgRNA) activity is crucial for optimizing the CRISPR/Cas9 gene-editing system, as it directly influences the efficiency and accuracy of genome modifications. However, existing prediction methods mainly rely on large-scale experimental data of a single Cas9 variant to construct Cas9 protein (variant)-specific sgRNA activity prediction models, which limits their generalization ability and prediction performance across different Cas9 protein (variants), as well as their scalability to continuously discovered new variants. In this study, we proposed PLM-CRISPR, a novel deep learning-based model that leverages protein language models to capture Cas9 protein (variant) representations for cross-variant sgRNA activity prediction. PLM-CRISPR uses tailored feature extraction modules for both sgRNA and protein sequences, incorporating a cross-variant training strategy and a dynamic feature fusion mechanism to effectively model their interactions. Extensive experiments demonstrate that PLM-CRISPR outperforms existing methods across datasets spanning 7 Cas9 protein (variants) in three real-world scenarios, showing superior performance in data-scarce situations, including cases with few or no samples for novel variants. Comparative analyses with traditional machine learning and deep learning models further confirm the effectiveness of PLM-CRISPR. Additionally, motif analysis reveals that PLM-CRISPR accurately identifies high-activity sgRNA sequence patterns across diverse Cas9 protein (variants). Overall, PLM-CRISPR provides a robust, scalable, and generalizable solution for sgRNA activity prediction across diverse Cas9 protein (variants). The source code can be obtained from https://github.com/CSUBioGroup/PLM-CRISPR. Supplementary data are available at Bioinformatics online.

  • Journal: Bioinformatics (Oxford, England)
  • Publication Date: Jul 2, 2025
  • Authors: Yalin Hou + 6
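
The dynamic feature fusion mechanism described above combines sgRNA features with protein language model embeddings. As a rough illustration of how such gated fusion is commonly built, here is a minimal PyTorch sketch; the dimensions (including the 1280-dimensional protein embedding, typical of ESM-style models), module names, and prediction head are illustrative assumptions, not PLM-CRISPR's actual architecture, which is available at the linked repository.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Toy dynamic feature fusion: a learned gate weighs sgRNA features
    against Cas9-protein features before the activity-prediction head."""
    def __init__(self, sgrna_dim=128, protein_dim=1280, hidden=256):
        super().__init__()
        self.sgrna_proj = nn.Linear(sgrna_dim, hidden)
        self.protein_proj = nn.Linear(protein_dim, hidden)  # assumed ESM-style embedding size
        self.gate = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.Sigmoid())
        self.head = nn.Sequential(nn.Linear(hidden, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, sgrna_feat, protein_feat):
        s = torch.relu(self.sgrna_proj(sgrna_feat))
        p = torch.relu(self.protein_proj(protein_feat))
        g = self.gate(torch.cat([s, p], dim=-1))   # per-dimension mixing weights
        fused = g * s + (1 - g) * p
        return self.head(fused).squeeze(-1)        # predicted sgRNA activity

# usage with random stand-in embeddings
model = GatedFusion()
activity = model(torch.randn(4, 128), torch.randn(4, 1280))
```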

A big data driven multilevel deep learning framework for predicting terrorist attacks

In recent years, terrorism has increasingly threatened human security, causing violence, fear, and damage to both the general public and specific targets. These attacks create unrest among individuals and within society. Leveraging recent advancements in deep learning, several intelligent systems have been developed to predict terrorist attacks. However, existing state-of-the-art models are limited: they lack support for big data, suffer from accuracy issues, and require extensive modifications. Therefore, to fill this gap, we propose an integrated big data deep learning-based predictive model to estimate the probability of a terrorist attack. We treat the series of terrorist activities as a sequence modeling problem and propose a big data long short-term memory (LSTM) network, a layered model capable of processing large-scale data. Our proposed model can learn from past events and forecast future attacks, predicting the likely location of future attacks at the city, country, and regional levels. The experimental study was carried out on samples from the global terrorism dataset, and promising results are reported on a number of standard evaluation metrics: accuracy, precision, recall, and F1 score. The obtained results suggest that the proposed model contributes substantially to predicting the probability of an attack at a particular location. Identifying potential attack locations allows law enforcement agencies to take suitable preventative measures to combat terrorism effectively.

  • Journal: Scientific Reports
  • Publication Date: Jul 2, 2025
  • Authors: Ume Kalsooma + 6
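
To make the sequence-modeling formulation above concrete, here is a minimal PyTorch sketch of an LSTM that reads a sequence of past-incident feature vectors and outputs a distribution over candidate locations. The feature dimension, depth, and location vocabulary are illustrative assumptions; the paper's big data LSTM is a larger layered system built for large-scale data.

```python
import torch
import torch.nn as nn

class AttackLocationLSTM(nn.Module):
    """Toy LSTM that reads a sequence of past incident feature vectors and
    predicts a probability distribution over candidate locations (e.g. cities)."""
    def __init__(self, n_features=16, hidden=64, n_locations=50):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        self.classifier = nn.Linear(hidden, n_locations)

    def forward(self, x):                    # x: (batch, time_steps, n_features)
        out, _ = self.lstm(x)
        return self.classifier(out[:, -1])   # logits from the last time step

model = AttackLocationLSTM()
logits = model(torch.randn(8, 30, 16))      # 8 sequences of 30 past events
probs = torch.softmax(logits, dim=-1)       # per-location attack probability
```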

A deep learning-based image analysis model for automated scoring of horizontal ocular movement disorders

Introduction: This study proposes a deep learning-based image analysis method for automated scoring of the severity of horizontal ocular movement disorders and evaluates its performance against traditional manual scoring methods.

Methods: A total of 2,565 ocular images were prospectively collected from 164 patients with ocular movement disorders and 121 healthy subjects. These images were labeled and used as the training set for the RetinaEye automatic scoring model. Additionally, 184 binocular gaze images (left and right turns) were collected from 92 patients with limited horizontal ocular movement, serving as the test set. Manual and automatic scoring were performed on the test set using ImageJ and RetinaEye, respectively. Furthermore, the consistency and correlation between the two scoring methods were assessed.

Results: RetinaEye successfully identified the centers of both pupils, as well as the positions of the medial and lateral canthi. It also automatically calculated the horizontal ocular movement scores based on the pixel coordinates of these key points. The model demonstrated high accuracy in identifying key points, particularly the lateral canthi. In the test group, manual and automated scoring results showed a high level of consistency and positive correlation among all affected oculi (κ = 0.860, p < 0.001; ρ = 0.897, p < 0.001).

Conclusion: The automatic scoring method based on RetinaEye demonstrated high consistency with manual scoring results. This new method objectively assesses the severity of horizontal ocular movement disorders and holds great potential for diagnosis and treatment selection.

  • Journal: Frontiers in Neurology
  • Publication Date: Jul 2, 2025
  • Authors: Xiao-Lu Jin + 4
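
The abstract notes that scores are computed from the pixel coordinates of the pupil centers and canthi, but does not give the formula. The function below is therefore a hypothetical sketch of how a resolution-independent horizontal excursion score could be derived from such keypoints; the normalization choice is an assumption.

```python
def horizontal_movement_score(pupil_x, medial_canthus_x, lateral_canthus_x):
    """Hypothetical score: pupil displacement from the medial canthus,
    normalized by the palpebral fissure width so it is resolution-independent.
    1.0 ~ full excursion, 0.0 ~ no movement. The paper's exact formula may differ."""
    fissure_width = abs(lateral_canthus_x - medial_canthus_x)
    if fissure_width == 0:
        raise ValueError("canthi coincide; keypoint detection failed")
    excursion = abs(pupil_x - medial_canthus_x)
    return min(excursion / fissure_width, 1.0)

# keypoint x-coordinates in pixels, e.g. from the detection model
print(horizontal_movement_score(pupil_x=312, medial_canthus_x=250, lateral_canthus_x=380))
```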

A Dynamic Kalman Filtering Method for Multi-Object Fruit Tracking and Counting in Complex Orchards

With the rapid development of agricultural intelligence in recent years, automatic fruit detection and counting technologies have become increasingly significant for optimizing orchard management and advancing precision agriculture. However, existing deep learning-based models are primarily designed to process static, single-frame images, and thereby fail to meet the large-scale detection and counting demands of the dynamically changing scenes of modern orchards. To address these challenges, this paper proposes a multi-object fruit tracking and counting method that integrates an improved YOLO-based object detection algorithm with a dynamically optimized Kalman filter. By optimizing the network structure, the improved YOLO detection model provides high-quality detection results for subsequent tracking tasks. A modified Kalman filter with a variable forgetting factor is then integrated to dynamically adjust the weighting of historical data, enabling the model to adapt to changes in observation and motion noise. Moreover, fruit targets are associated using a combined strategy based on Intersection over Union (IoU) and Re-Identification (Re-ID) features, improving the accuracy and stability of object matching. Consequently, continuous tracking and precise counting of fruits in video sequences are achieved. Experimental results on fruit image frames from video sequences show that the proposed method performs robust and continuous tracking (MOTA of 95.0% and HOTA of 82.4%). For fruit counting, the method attains a high coefficient of determination (R²) of 0.85 and a low root-mean-square error (RMSE) of 1.57, exhibiting high accuracy and stability of fruit detection, tracking, and counting in video sequences under complex orchard environments.

  • Journal: Sensors
  • Publication Date: Jul 2, 2025
  • Authors: Yaning Zhai + 4
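
To illustrate the variable-forgetting-factor idea, here is a minimal NumPy sketch of one Kalman predict/update cycle in which the predicted covariance is divided by a factor slightly below one, inflating uncertainty so that stale history is down-weighted. The constant-velocity model, noise values, and fixed factor are illustrative assumptions; the paper's filter adjusts the factor dynamically.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R, forgetting=0.98):
    """One predict/update cycle of a Kalman filter with a forgetting factor.
    Dividing the predicted covariance by a factor < 1 inflates it, which
    down-weights old observations and lets the filter adapt to motion changes."""
    # predict
    x_pred = F @ x
    P_pred = (F @ P @ F.T + Q) / forgetting   # forgetting inflates uncertainty
    # update
    S = H @ P_pred @ H.T + R                  # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# constant-velocity model in 1D: state = [position, velocity]
F = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
x, P = np.zeros(2), np.eye(2)
for z in [np.array([1.1]), np.array([2.0]), np.array([3.2])]:
    x, P = kalman_step(x, P, z, F, H, Q=0.01 * np.eye(2), R=np.array([[0.25]]))
print(x)  # estimated position and velocity
```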

Development and clinical validation of deep learning-based immunohistochemistry prediction models for subtyping and staging of gastrointestinal cancers

Background: Immunohistochemistry (IHC) is a critical tool for tumor diagnosis and treatment, but it is time- and tissue-consuming, and highly dependent on skilled laboratory technicians. Recently, deep learning-based IHC biomarker prediction models have been widely developed, but few investigations have explored their clinical application effectiveness.

Methods: In this study, we aimed to create an automatic pipeline for constructing deep learning models that generate AI-IHC (artificial intelligence IHC) output from H&E whole slide images (WSIs), and compared pathology reports made on AI-IHC versus conventional IHC. We obtained 134 WSIs comprising H&E and IHC pairs, and automatically extracted 415,463 tiles from the H&E slides for model construction based on annotation transfer from the IHC slides. Five IHC biomarker prediction models (P40, Pan-CK, Desmin, P53, Ki-67) were developed to support a range of clinically relevant diagnostic applications across various gastrointestinal cancer subtypes, including esophageal, gastric, and colorectal cancers. The Ki-67 proliferation index was quantitatively assessed using digital image analysis.

Results: The AUCs of the five IHC biomarker models ranged from 0.90 to 0.96, and the accuracies were between 83.04% and 90.81%. An additional 150 WSIs from 30 patients were collected to assess the effectiveness of AI-IHC through a multi-reader multi-case (MRMC) study. Each case was read by three pathologists, once on AI-IHC and once on conventional IHC, with a minimum 2-week washout period. The results indicate that the consistency rates of pathologists between AI and conventional IHC cases were high for Desmin, Pan-CK, and P40 (96.67-100%) and moderate for P53 (70.00%). We also evaluated T-stage through the staining of these IHC biomarkers, and the consistency rate was 86.36%. Furthermore, the Ki-67 proliferation index reported by AI-IHC showed a variability of 17.35% ± 16.2% compared to conventional IHC, with an ICC of 0.415 (p = 0.015) between the two groups.

Conclusions: Here, we leveraged automatic tile-level annotations from H&E slides to efficiently develop deep learning-based IHC biomarker models, achieving AUCs between 0.90 and 0.96. AI-generated IHC showed substantial concordance with conventional IHC across most markers, supporting its potential as an assistive tool in routine diagnostics.

  • Journal: BMC Gastroenterology
  • Publication Date: Jul 1, 2025
  • Authors: Junxiao Wang + 9

Ink classification in historical documents using hyperspectral imaging and machine learning methods.

  • Journal: Spectrochimica acta. Part A, Molecular and biomolecular spectroscopy
  • Publication Date: Jul 1, 2025
  • Authors: Ana Belén López-Baldomero + 4

Stochastic characteristics of vehicle-bridge vibration under earthquakes with parameter uncertainty: A deep learning-based model

  • Journal: Structures
  • Publication Date: Jul 1, 2025
  • Authors: Mengxue Yang + 2

A comparison of an integrated and image-only deep learning model for predicting the disappearance of indeterminate pulmonary nodules.

  • Journal: Computerized medical imaging and graphics: the official journal of the Computerized Medical Imaging Society
  • Publication Date: Jul 1, 2025
  • Authors: Jingxuan Wang + 6
  • Open Access

Application of deep learning-based facial pain recognition model for postoperative pain assessment.

  • Journal: Journal of clinical anesthesia
  • Publication Date: Jul 1, 2025
  • Authors: Ji-Tuo Zhang + 4

Generating Inverse Feature Space for Class Imbalance in Point Cloud Semantic Segmentation.

Point cloud semantic segmentation can enhance the understanding of the production environment and is a crucial component of vision tasks. The efficacy and generalization prowess of deep learning-based segmentation models are inherently contingent upon the quality and nature of the data employed in their training. However, it is often challenging to obtain data with inter-class balance, and training an intelligent segmentation network with imbalanced data may cause cognitive bias. In this paper, a network framework, InvSpaceNet, is proposed, which generates an inverse feature space to alleviate the cognitive bias caused by imbalanced data. Specifically, we design a dual-branch training architecture that combines the superior feature representations derived from instance-balanced sampling data with the cognitive corrections introduced by the proposed inverse sampling data. In the inverse feature space of the point cloud generated by the auxiliary branch, the central points aggregated by class are constrained by a contrastive loss. To refine class cognition in the inverse feature space, features are used to generate point cloud class prototypes through momentum updates. These class prototypes from the inverse space are utilized to generate feature maps and structure maps that are aligned with the positive feature space of the main branch segmentation network. The training of the main branch is dynamically guided through gradients backpropagated from the different losses. Extensive experiments conducted on four large benchmarks (i.e., S3DIS, ScanNet v2, Toronto-3D, and SemanticKITTI) demonstrate that the proposed method can effectively mitigate point cloud class imbalance and improve segmentation performance.

  • Journal: IEEE transactions on pattern analysis and machine intelligence
  • Publication Date: Jul 1, 2025
  • Authors: Jiawei Han + 4
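
As a rough sketch of the class-prototype machinery mentioned above, the PyTorch code below maintains per-class prototypes with a momentum update and applies an InfoNCE-style contrastive loss that pulls each feature toward its own class prototype. The momentum value, temperature, and exact loss form are assumptions standing in for InvSpaceNet's actual constraints.

```python
import torch
import torch.nn.functional as F

def update_prototypes(prototypes, feats, labels, momentum=0.9):
    """Momentum update of per-class prototypes from a batch of point features.
    prototypes: (C, D), feats: (N, D), labels: (N,) with values in [0, C)."""
    for c in labels.unique():
        mean_c = feats[labels == c].mean(dim=0)
        prototypes[c] = momentum * prototypes[c] + (1 - momentum) * mean_c
    return prototypes

def prototype_contrastive_loss(feats, labels, prototypes, temperature=0.1):
    """InfoNCE-style loss pulling each feature toward its class prototype
    and away from the other prototypes (a common stand-in for the paper's
    contrastive constraint on class centers)."""
    logits = F.normalize(feats, dim=1) @ F.normalize(prototypes, dim=1).T
    return F.cross_entropy(logits / temperature, labels)

C, D, N = 13, 64, 256                       # e.g. 13 S3DIS classes
prototypes = torch.randn(C, D)
feats, labels = torch.randn(N, D), torch.randint(0, C, (N,))
prototypes = update_prototypes(prototypes, feats, labels)
loss = prototype_contrastive_loss(feats, labels, prototypes)
```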

DSNet enables feature fusion and detail restoration for accurate object detection in foggy conditions

In real-world scenarios, adverse weather conditions can significantly degrade the performance of deep learning-based object detection models. Specifically, fog reduces visibility, complicating feature extraction and leading to detail loss, which impairs object localization and classification. Traditional approaches often apply image dehazing techniques before detection to enhance degraded images; however, these processed images often retain a rough appearance with a loss of detail. To address these challenges, we propose a novel network, DehazeSRNet (DSNet), which is designed to optimize feature transmission and restore lost image details. First, DSNet utilizes the dehaze fusion network (DFN) to learn dehazing features, applying differentiated processing weights to regions with light and dense fog. Second, to enhance feature transmission, DSNet introduces the MistClear Attention (MCA) module, which is based on a re-parameterized channel-shuffle attention mechanism and effectively optimizes feature information transfer and fusion. Finally, to restore image details, we design the hybrid pixel activation transformer (HPAT), which combines channel attention and window-based self-attention mechanisms to activate additional pixel regions. Experimental results on the Foggy Cityscapes, RTTS, DAWN, and rRain datasets demonstrate that DSNet significantly outperforms existing methods in accuracy and achieves exceptional real-time performance, reaching 78.1 frames per second (FPS), highlighting its potential for practical applications in dynamic environments. As a robust detection framework, DSNet offers theoretical insights and practical references for future research on object detection under adverse weather conditions.

  • Journal: Scientific Reports
  • Publication Date: Jul 1, 2025
  • Authors: Zhiyong Jing + 5

PRDTinyML: deep learning-based TinyML-based pedestrian detection model in autonomous vehicles for smart cities

Detecting pedestrians and cars in smart cities is a major task for autonomous vehicles (AV) to prevent accidents. Occlusion, distortion, and multi-instance pictures make pedestrian and rider detection difficult. Recently, deep learning (DL) systems have shown promise for AV pedestrian identification. The restricted resources of internet of things (IoT) devices have made it difficult to integrate DL with pedestrian detection. Tiny machine learning (TinyML) was used to recognize pedestrians and cyclists in the EuroCity persons (ECP) dataset. After preliminary testing, we propose five microcontroller-deployable lightweight DL models in this study. We applied SqueezeNet, AlexNet, and convolutional neural network (CNN) DL models. We also use two pre-trained models, MobileNet-V2 and MobileNet-V3, to determine the model with the optimal size and accuracy. Quantization aware training (QAT), full integer quantization (FIQ), and dynamic range quantization (DRQ) were used. The CNN model had the smallest size at 0.07 MB using the DRQ approach, followed by SqueezeNet, AlexNet, MobileNet-V2, and MobileNet-V3 with 0.161 MB, 0.69 MB, 1.824 MB, and 1.95 MB, respectively. The MobileNet-V3 model's DRQ accuracy after optimization was 99.60% for day photos and 98.86% for night images, outperforming the other models. The MobileNet-V2 model followed with DRQ accuracies of 99.27% and 98.24% for day and night images.

  • Journal: Indonesian Journal of Electrical Engineering and Computer Science
  • Publication Date: Jul 1, 2025
  • Authors: Norah N Alajlan + 2
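
The quantization modes named above (DRQ, FIQ, QAT) correspond to standard TensorFlow Lite workflows. Below is a minimal sketch of DRQ and FIQ conversion with the public TFLite converter API; the tiny stand-in CNN, input size, and random representative-data generator are illustrative assumptions, not the paper's models or calibration data.

```python
import tensorflow as tf

# tiny stand-in CNN; the paper's models (SqueezeNet, AlexNet, MobileNets) plug in the same way
model = tf.keras.Sequential([
    tf.keras.Input(shape=(96, 96, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),   # pedestrian / rider
])

# dynamic range quantization (DRQ): int8 weights, no calibration data required
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_drq = converter.convert()

# full integer quantization (FIQ): additionally needs representative data for activation ranges
def representative_data():
    for _ in range(100):
        yield [tf.random.uniform((1, 96, 96, 3))]     # real calibration images in practice

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
tflite_fiq = converter.convert()

print(len(tflite_drq), len(tflite_fiq))               # serialized model sizes in bytes
```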

CO2 sequestration and mineralization in basalts: Insights from a deep learning-based surrogate model

  • Journal: Engineering Geology
  • Publication Date: Jul 1, 2025
  • Authors: Weiquan Ouyang + 4

Deep learning for gender estimation using hand radiographs: a comparative evaluation of CNN models

Background: Accurate gender estimation plays a crucial role in forensic identification, especially in mass disasters or cases involving fragmented or decomposed remains where traditional skeletal landmarks are unavailable. This study aimed to develop a deep learning-based model for gender classification using hand radiographs, offering a rapid and objective alternative to conventional methods.

Methods: We analyzed 470 left-hand X-ray images from adults aged 18 to 65 years using four convolutional neural network (CNN) architectures: ResNet-18, ResNet-50, InceptionV3, and EfficientNet-B0. Following image preprocessing and data augmentation, models were trained and validated using standard classification metrics: accuracy, precision, recall, and F1 score. Data augmentation included random rotation, horizontal flipping, and brightness adjustments to enhance model generalization.

Results: Among the tested models, ResNet-50 achieved the highest classification accuracy (93.2%), with precision of 92.4%, recall of 93.3%, and F1 score of 92.5%. While the other models demonstrated acceptable performance, ResNet-50 consistently outperformed them across all metrics. These findings suggest CNNs can reliably extract sexually dimorphic features from hand radiographs.

Conclusions: Deep learning approaches, particularly ResNet-50, provide a robust, scalable, and efficient solution for gender prediction from hand X-ray images. This method may serve as a valuable tool in forensic scenarios where speed and reliability are critical. Future research should validate these findings across diverse populations and incorporate explainable AI techniques to enhance interpretability.

  • Journal: BMC Medical Imaging
  • Publication Date: Jul 1, 2025
  • Authors: Hilal Er Ulubaba + 4
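
For a concrete picture of the transfer-learning recipe, here is a minimal PyTorch/torchvision sketch: an ImageNet-pretrained ResNet-50 with a two-class head, plus the rotation, flip, and brightness augmentations the abstract lists. The learning rate, augmentation magnitudes, and the random stand-in batch are assumptions, not the study's training configuration.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# augmentations mirroring those described: rotation, horizontal flip, brightness
train_tf = transforms.Compose([
    transforms.RandomRotation(10),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2),
    transforms.Resize((224, 224)),
])

# ImageNet-pretrained ResNet-50 with a 2-class head (female / male)
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# one toy training step on random stand-in radiograph tensors
images = train_tf(torch.rand(4, 3, 256, 256))
labels = torch.randint(0, 2, (4,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```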

Deep learning-based cough classification using application-recorded sounds: a transfer learning approach with VGGish

Background: Coughing sounds contain biometric information that can help in the assessment of respiratory diseases. While clinicians find coughs insightful, non-experts struggle to identify abnormalities in cough sounds. Furthermore, because respiratory diseases are characterized by widespread health complications and elevated mortality rates, the development of early diagnostic systems is imperative for ensuring timely intervention and improving outcomes for both clinicians and patients. Accordingly, we propose a deep learning-based model for early diagnosis. To enhance the reliability of the training data, we utilized annotations provided by multiple medical specialists. Additionally, we examined how clinical expertise and diagnostic input influence the model's generalization performance.

Methods: This study introduces a deep learning framework utilizing VGGish as a transfer learning model, enhanced with additional detection and classification networks. The detection model identifies cough events within recorded audio, and the classification model then determines whether a detected cough is normal or abnormal. Both models were trained on raw cough sound data collected via smartphones and labeled by medical experts through a rigorous inspection process.

Results: Experimental evaluations demonstrated that the cough detection model achieved an average accuracy of 0.9883, while the cough classification model attained accuracies of 0.8417, 0.8629, and 0.8662 on datasets 1, 2, and 3, respectively. To enhance interpretability, we applied Grad-CAM to visualize the features that influenced the model's decision-making. Model performance was further evaluated using the area under the receiver operating characteristic curve (AUROC) and the area under the precision-recall curve (AUPRC).

Conclusions: Our proposed cough classification model has the potential to assist individuals with limited access to healthcare as well as medical professionals with limited experience in diagnosing cough-related conditions. By leveraging deep learning and smartphone-recorded cough sounds, this approach aims to enhance early detection and management of respiratory diseases.

  • Journal: BMC Medical Informatics and Decision Making
  • Publication Date: Jul 1, 2025
  • Authors: Sanghoon Han + 10
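
To sketch the VGGish transfer-learning structure, the code below pools frame-level embeddings and classifies them with a small head. The vggish_embed helper is a hypothetical placeholder for a frozen VGGish backbone (which yields one 128-dimensional embedding per roughly 0.96-second audio frame); the pooling and head architecture are likewise illustrative assumptions.

```python
import torch
import torch.nn as nn

def vggish_embed(waveform):
    """Placeholder for a frozen VGGish backbone, which maps audio to one
    128-dimensional embedding per ~0.96 s frame (hypothetical helper; a real
    implementation would wrap a published VGGish checkpoint)."""
    n_frames = max(1, waveform.shape[-1] // 15360)   # ~0.96 s at 16 kHz
    return torch.randn(n_frames, 128)

class CoughClassifier(nn.Module):
    """Small head on top of VGGish embeddings: normal vs. abnormal cough."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, embeddings):                   # (n_frames, 128)
        return self.net(embeddings.mean(dim=0))      # pool frames, then classify

clip = torch.randn(16000 * 3)                        # 3 s of 16 kHz audio
logits = CoughClassifier()(vggish_embed(clip))
```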

Nested U-Net-Based GAN Model for Super-Resolution of Stained Light Microscopy Images

The purpose of this study was to propose a deep learning-based model for the super-resolution reconstruction of stained light microscopy images. To achieve this, a perceptual loss was applied to the generator to reflect multichannel signal intensity, distribution, and structural similarity. A nested U-Net architecture was employed to address the representational limitations of the conventional U-Net. For quantitative evaluation, the peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and correlation coefficient (CC) were calculated. In addition, intensity profile analysis was performed to assess the model's ability to restore boundary signals more precisely. The experimental results demonstrated that the proposed model outperformed both single U-Net and U-Net-based generative adversarial network (GAN) models in signal and structural restoration. The PSNR, SSIM, and CC values demonstrated relative improvements of approximately 1.017, 1.023, and 1.010 times, respectively, compared to the input images. In particular, the intensity profile analysis confirmed the effectiveness of the nested U-Net-based generator in restoring cellular boundaries and structures in the stained microscopy images. In conclusion, the proposed model effectively enhanced the resolution of stained light microscopy images acquired in a multichannel format.

  • Journal: Photonics
  • Publication Date: Jul 1, 2025
  • Authors: Seong-Hyeon Kang + 1
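
The three quantitative metrics named above are straightforward to compute; here is a minimal sketch using scikit-image and NumPy. The random stand-in images and unit data range are assumptions; in practice the reference and restored images would come from paired test data.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_super_resolution(reference, restored):
    """PSNR, SSIM, and Pearson correlation between a ground-truth image and
    a super-resolved output, as used to score the model in the abstract above."""
    psnr = peak_signal_noise_ratio(reference, restored, data_range=1.0)
    ssim = structural_similarity(reference, restored, data_range=1.0)
    cc = np.corrcoef(reference.ravel(), restored.ravel())[0, 1]
    return psnr, ssim, cc

# random stand-in images in place of real microscopy data
rng = np.random.default_rng(0)
reference = rng.random((256, 256)).astype(np.float32)
restored = np.clip(reference + rng.normal(0, 0.05, reference.shape), 0, 1).astype(np.float32)
print(evaluate_super_resolution(reference, restored))
```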

A deep learning framework for reconstructing Breast Amide Proton Transfer weighted imaging sequences from sparse frequency offsets to dense frequency offsets.

  • Journal: Computerized medical imaging and graphics: the official journal of the Computerized Medical Imaging Society
  • Publication Date: Jul 1, 2025
  • Authors: Qiuhui Yang + 12

Ear Pathologies Using Deep Learning on Otoscopic Images

In this study, the performance of different deep learning architectures is comparatively analyzed for the classification of ear pathologies based on otoscopic images. The dataset included four basic classes: chronic otitis media, ear wax obstruction, myringosclerosis, and normal ear structure. The images were normalized to a 224×224-pixel resolution and made suitable for the models, and classification was performed using CNN, CNN-LSTM, DenseNet121, ResNet50, and EfficientNet architectures. During the training and validation phases, performance metrics such as accuracy, F1 score, precision, recall, and loss values were calculated, and the class discrimination power of the models was evaluated with ROC curves and confusion matrices. According to the results, the CNN-LSTM and DenseNet121 architectures showed the best performance, with over 94% accuracy and high F1 scores in both the training and validation sets. Some transfer learning-based architectures, such as EfficientNet and ResNet50, showed low generalization performance. This study demonstrates the effectiveness of deep learning-based models for the computerized diagnosis of intra-ear diseases and provides an important basis for decision support systems to be developed in this field.

  • Journal: Uluslararası Sürdürülebilir Mühendislik ve Teknoloji Dergisi
  • Publication Date: Jun 30, 2025
  • Authors: Yasin Tatlı

A Deep Learning-Based De-Artifact Diffusion Model for Removing Motion Artifacts in Knee MRI.

Motion artifacts are common in knee MRI and usually lead to rescanning, so their effective removal would be clinically useful. The aim was to construct an effective deep learning-based model to remove motion artifacts from knee MRI using real-world data. Retrospective. Model construction: 90 consecutive patients (1997 2D slices) who had knee MRI images with motion artifacts paired with immediately rescanned images without artifacts, which served as ground truth. Internal test dataset: 25 patients (795 slices) from another period; external test dataset: 39 patients (813 slices) from another hospital. 3-T/1.5-T knee MRI with T1-weighted, T2-weighted, and proton-weighted imaging. A deep learning-based supervised conditional diffusion model was constructed. Objective metrics (root mean square error [RMSE], peak signal-to-noise ratio [PSNR], structural similarity [SSIM]) and subjective ratings were used for image quality assessment and compared against three other algorithms (enhanced super-resolution [ESR], enhanced deep super-resolution, and ESR using a generative adversarial network). The diagnostic performance of the output images was compared with the rescanned images. Statistical tests included the kappa test, Pearson's chi-square test, Friedman's rank-sum test, and the marginal homogeneity test; p < 0.05 was considered statistically significant. Subjective ratings showed significant improvements in the output images compared to the input, with no significant difference from the ground truth. The constructed method demonstrated the smallest RMSE (11.44 ± 5.47 in the validation cohort; 13.95 ± 4.32 in the external test cohort), the largest PSNR (27.61 ± 3.20 in the validation cohort; 25.64 ± 2.67 in the external test cohort), and the largest SSIM (0.97 ± 0.04 in the validation cohort; 0.94 ± 0.04 in the external test cohort) compared to the other three algorithms. The output images achieved diagnostic capability comparable to the ground truth for multiple anatomical structures. The constructed model exhibited feasibility and effectiveness, outperforming multiple other algorithms for removing motion artifacts in knee MRI. Level of evidence: 3. Technical efficacy: Stage 2.

  • Journal: Journal of magnetic resonance imaging: JMRI
  • Publication Date: Jun 30, 2025
  • Authors: Yingchun Li + 9

3D Auto-segmentation of pancreas cancer and surrounding anatomical structures for surgical planning.

This multicenter study aimed to develop a deep learning-based autosegmentation model for pancreatic cancer and surrounding anatomical structures using computed tomography (CT) to enhance surgical planning. We included patients with pancreatic cancer who underwent pancreatic surgery at three tertiary referral hospitals. A hierarchical Swin Transformer V2 model was implemented to segment the pancreas, pancreatic cancers, and peripancreatic structures from preoperative contrast-enhanced CT scans. Data from one tertiary institution were divided into training and internal validation sets at a 3:1 ratio, with a separately prepared external validation set from two other institutions. Segmentation performance was quantitatively assessed using the Dice similarity coefficient (DSC) and qualitatively evaluated (complete vs. partial vs. absent). A total of 275 patients (51.6% male, mean age 65.8 ± 9.5 years) were included (176 training, 59 internal validation, and 40 external validation). No significant differences in baseline characteristics were observed between the groups. The model achieved an overall mean DSC of 75.4 ± 6.0 and 75.6 ± 4.8 in the internal and external validation groups, respectively. It showed high accuracy particularly for the pancreas parenchyma (84.8 ± 5.3 and 86.1 ± 4.1) and lower accuracy for pancreatic cancer (57.0 ± 28.7 and 54.5 ± 23.5). DSC scores for pancreatic cancer tended to increase with larger tumor sizes. Moreover, the qualitative assessments revealed high accuracy for the superior mesenteric artery (complete segmentation, 87.5%-100%), the portal and superior mesenteric veins (97.5%-100%), and the pancreas parenchyma (83.1%-87.5%), but lower accuracy for cancers (62.7%-65.0%). The deep learning-based autosegmentation model for 3D visualization of pancreatic cancer and peripancreatic structures showed robust performance. Further improvement will enable many promising applications in clinical research.

  • Journal: International journal of surgery (London, England)
  • Publication Date: Jun 27, 2025
  • Authors: Jinsoo Rhu + 8
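
The Dice similarity coefficient (DSC) used above is simple to compute from binary masks; a minimal NumPy sketch follows (note the abstract reports DSC scaled by 100). The toy 3D masks are stand-ins for actual CT segmentations.

```python
import numpy as np

def dice_similarity(pred, target, eps=1e-8):
    """Dice similarity coefficient (DSC) between two binary masks,
    the overlap metric used to score the segmentation model above.
    DSC = 2|P ∩ T| / (|P| + |T|), in [0, 1]."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

# toy 3D masks standing in for pancreas segmentations on CT
pred = np.zeros((8, 64, 64), dtype=bool); pred[:, 10:40, 10:40] = True
target = np.zeros_like(pred);             target[:, 15:45, 15:45] = True
print(f"DSC = {dice_similarity(pred, target):.3f}")
```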

