A Two-Phase Deep Learning Model for Counterfeit Detection of Indian Banknotes using YOLO-NAS and UV Imaging for Visually Impaired People


Similar Papers
  • Research Article
  • 10.1200/jco.2025.43.16_suppl.e18023
Comparative diagnostic accuracy of deep learning and hand-crafted radiomics models for detecting lymph node metastases in head and neck cancers: A meta-analysis.
  • Jun 1, 2025
  • Journal of Clinical Oncology
  • Abdur Rehman + 3 more

Background: Accurate detection of lymph node metastases (LNM) is crucial in the management of head and neck cancers. Artificial intelligence (AI) techniques, including deep learning (DL) and hand-crafted radiomics (HCR) models, have shown potential in improving diagnostic accuracy. This study aims to compare the performance of DL and HCR models for detecting LNM. Methods: Studies employing DL and HCR models for LNM detection were systematically analyzed. Internal validation datasets were utilized due to limited external validation in the literature. Diagnostic performance metrics, including sensitivity, specificity, and area under the curve (AUC), were evaluated using summary receiver operating characteristic (SROC) curves and paired forest plots. Heterogeneity was assessed using the I² statistic, and leave-one-out sensitivity analyses were performed to identify outliers. Data analysis was conducted in the R software environment (version 4.2.1, R Foundation for Statistical Computing, Vienna, Austria). Results: The pooled AUCs were 92.1% (95% CI: 84.9–94.7%) for DL models and 90.5% (95% CI: 82.9–91.8%) for HCR models, with no statistically significant difference in diagnostic accuracy (p = 0.978). Sensitivities and specificities were 83.9% (95% CI: 77.6–88.7%) and 87.0% (95% CI: 81.6–91.1%) for DL models, and 82.7% (95% CI: 76.9–87.2%) and 86.2% (95% CI: 81.1–90.5%) for HCR models, respectively. Substantial heterogeneity was observed (DL: I² = 72.5–93.1%; HCR: I² = 31.3–65.9%), although excluding outliers did not significantly alter the results (p = 0.981). Conclusions: DL and HCR models exhibit comparable diagnostic accuracy for detecting LNM in head and neck cancers. While both approaches show promise, the substantial heterogeneity highlights the need for external validation to confirm their reliability across diverse clinical settings.
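As a rough illustration of the random-effects pooling behind such summary estimates, here is a minimal Python sketch of DerSimonian–Laird pooling of logit-transformed sensitivities. The study itself used R, and the `sens`/`n` values below are made up:

```python
import numpy as np

def pool_logit(p, n):
    """DerSimonian-Laird random-effects pooling of proportions on the logit scale."""
    p, n = np.asarray(p, float), np.asarray(n, float)
    y = np.log(p / (1 - p))                 # logit-transformed proportions
    v = 1.0 / (n * p * (1 - p))             # approximate within-study variances
    w = 1.0 / v
    y_fixed = np.sum(w * y) / np.sum(w)     # fixed-effect mean
    q = np.sum(w * (y - y_fixed) ** 2)      # Cochran's Q
    df = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)           # between-study variance
    w_re = 1.0 / (v + tau2)                 # random-effects weights
    mu = np.sum(w_re * y) / np.sum(w_re)
    i2 = max(0.0, (q - df) / q) * 100.0     # I^2 heterogeneity (%)
    return 1.0 / (1.0 + np.exp(-mu)), i2    # back-transformed pooled estimate

# hypothetical per-study sensitivities and sample sizes
sens, i2 = pool_logit([0.84, 0.78, 0.89, 0.81], [120, 95, 200, 60])
print(f"pooled sensitivity = {sens:.3f}, I^2 = {i2:.1f}%")
```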

  • Research Article
  • Citations: 4
  • 10.1016/j.pmip.2024.100125
The use of machine learning and deep learning models in detecting depression on social media: A systematic literature review
  • Apr 5, 2024
  • Personalized Medicine in Psychiatry
  • Wadzani Aduwamai Gadzama + 3 more

  • Research Article
  • Citations: 13
  • 10.1371/journal.pone.0282608
A hybrid CNN and ensemble model for COVID-19 lung infection detection on chest CT scans.
  • Mar 9, 2023
  • PLOS ONE
  • Ahmed A Akl + 3 more

COVID-19 is highly infectious and causes acute respiratory disease. Machine learning (ML) and deep learning (DL) models are vital in detecting the disease from computerized chest tomography (CT) scans, and DL models have outperformed ML models. For COVID-19 detection from CT scan images, DL models are typically used end-to-end, so model performance reflects both the quality of the extracted features and the classification accuracy. This work makes four contributions. First, it studies the quality of the features extracted by a DL model by feeding those features to an ML model; in other words, we compare the end-to-end DL approach against using DL for feature extraction and ML for the classification of COVID-19 CT scan images. Second, we study the effect of fusing features from image descriptors, e.g., the Scale-Invariant Feature Transform (SIFT), with features extracted by DL models. Third, we propose a new Convolutional Neural Network (CNN) trained from scratch and compare it to deep transfer learning on the same classification problem. Finally, we study the performance gap between classic ML models and ensemble learning models. The proposed framework is evaluated on a CT dataset using five different metrics. The results revealed that the proposed CNN model extracts better features than the well-known DL models. Moreover, using a DL model for feature extraction and an ML model for classification achieved better results than an end-to-end DL model for detecting COVID-19 in CT scan images. Notably, the accuracy of the former method improved further when ensemble learning models replaced the classic ML models. The proposed method achieved a best accuracy of 99.39%.
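A minimal sketch of the "DL features + ML classifier" pattern this paper compares against end-to-end DL, assuming ResNet-18 as a stand-in backbone and random tensors in place of real CT data:

```python
import numpy as np
import torch
import torchvision
from sklearn.ensemble import RandomForestClassifier

# stand-in data: random tensors shaped like preprocessed CT images
X_train, X_test = torch.randn(32, 3, 224, 224), torch.randn(8, 3, 224, 224)
y_train, y_test = np.random.randint(0, 2, 32), np.random.randint(0, 2, 8)

backbone = torchvision.models.resnet18(weights="IMAGENET1K_V1")
backbone.fc = torch.nn.Identity()          # drop the classifier -> 512-d features
backbone.eval()

@torch.no_grad()
def extract(x):
    return backbone(x).cpu().numpy()       # CNN used purely as feature extractor

clf = RandomForestClassifier(n_estimators=300, random_state=0)  # ensemble classifier
clf.fit(extract(X_train), y_train)
print("test accuracy:", clf.score(extract(X_test), y_test))
```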

  • Research Article
  • 10.1302/1358-992x.2024.18.050
Implant detection and classification from a small dataset of lower limb radiographs: performance of deep learning models pre-trained on larger datasets
  • Nov 14, 2024
  • Orthopaedic Proceedings
  • F Birkholtz + 3 more

Introduction: Inaccurate identification of implants on X-rays may lead to prolonged surgical duration as well as increased complexity and costs during implant removal. Deep learning models may help to address this problem, although they typically require large datasets to effectively train models in detecting and classifying objects such as implants. This can limit applicability when only smaller datasets are available. Transfer learning can be used to overcome this limitation by leveraging large, publicly available datasets to pre-train detection and classification models. The aim of this study was to assess the effectiveness of deep learning models in implant localisation and classification on a lower limb X-ray dataset. Method: Firstly, detection models were evaluated on their ability to localise four categories of implants: plates, screws, pins, and intramedullary nails. The detection models (Faster R-CNN, YOLOv5, EfficientDet) were pre-trained on the large, freely available COCO dataset (330,000 images). Secondly, classification models (DenseNet121, Inception V3, ResNet18, ResNet101) were evaluated on their ability to classify five types of intramedullary nails. Localisation and classification accuracy were evaluated on a smaller image dataset (204 images). Results: The YOLOv5s model showed the best capacity to detect and distinguish between different types of implants (accuracy: plate = 82.1%, screw = 72.3%, intramedullary nail = 86.9%, pin = 79.9%). Screws were the most difficult implants to detect, likely due to overlapping screws visible in the image dataset. The DenseNet121 classification model showed the best performance in classifying different types of intramedullary nails (accuracy = 73.7%). A deep learning pipeline combining YOLOv5s and DenseNet121 was therefore proposed for optimal performance in automating implant localisation and classification on a relatively small dataset. Conclusion: These findings support the potential of deep learning techniques in enhancing implant detection accuracy. With further development, AI-based implant identification may benefit patients, surgeons and hospitals through improved surgical planning and efficient use of theatre time.
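A hedged sketch of such a two-stage pipeline (YOLOv5s localisation followed by DenseNet121 classification): the COCO-pretrained detector is loaded from the public hub, but the classification head below is untrained (it would need fine-tuning on nail types first) and the file path is hypothetical:

```python
import torch
from torchvision import models, transforms
from PIL import Image

detector = torch.hub.load("ultralytics/yolov5", "yolov5s")    # COCO-pretrained
classifier = models.densenet121(weights="IMAGENET1K_V1")
classifier.classifier = torch.nn.Linear(1024, 5)  # 5 nail types; fine-tune before use
classifier.eval()

prep = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

img = Image.open("radiograph.png").convert("RGB")             # hypothetical input
with torch.no_grad():
    # each detection row is [x1, y1, x2, y2, confidence, class]
    for *xyxy, conf, cls in detector(img).xyxy[0].tolist():
        crop = img.crop(tuple(map(int, xyxy)))                # cut out the implant
        logits = classifier(prep(crop).unsqueeze(0))          # classify the crop
        print(int(cls), round(conf, 2), logits.argmax(1).item())
```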

  • Research Article
  • Citations: 14
  • 10.1007/s00330-022-08950-w
Lower-extremity fatigue fracture detection and grading based on deep learning models of radiographs.
  • Jun 24, 2022
  • European Radiology
  • Yanping Wang + 9 more

To identify the feasibility of deep learning-based diagnostic models for detecting and assessing lower-extremity fatigue fracture severity on plain radiographs, this retrospective study enrolled 1151 X-ray images (tibiofibula/foot: 682/469) of fatigue fractures and 2842 X-ray images (tibiofibula/foot: 2000/842) without abnormal presentations from two clinical centers. After labeling the lesions, images from one center (tibiofibula/foot: 2539/1180) were allocated at 7:1:2 for model construction, and the remaining images from the other center (tibiofibula/foot: 143/131) were used for external validation. A ResNet-50 and a triplet branch network were adopted to construct diagnostic models for detection and grading. The detection models were evaluated with sensitivity, specificity, and area under the receiver operating characteristic curve (AUC), while the grading models were evaluated with accuracy by confusion matrix. Visual estimations by radiologists were performed for comparison with the models. For the detection model on tibiofibula, a sensitivity of 95.4%/85.5%, a specificity of 80.1%/77.0%, and an AUC of 0.965/0.877 were achieved in the internal testing/external validation set. The detection model on foot reached a sensitivity of 96.4%/90.8%, a specificity of 76.0%/66.7%, and an AUC of 0.947/0.911. The detection models showed performance superior to the junior radiologist and comparable to the intermediate or senior radiologists. The overall accuracy of the diagnostic model was 78.5%/62.9% for tibiofibula and 74.7%/61.1% for foot in the internal testing/external validation set. The deep learning-based models could be applied to the radiological diagnosis of plain radiographs to assist in the detection and grading of fatigue fractures of the tibiofibula and foot. Key points:
  • Fatigue fractures on radiographs are relatively difficult to detect and are apt to be misdiagnosed.
  • Detection and grading models based on deep learning were constructed on a large cohort of radiographs with lower-extremity fatigue fractures.
  • The detection model, with its high sensitivity, would help to reduce the misdiagnosis of lower-extremity fatigue fractures.
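For reference, the detection metrics reported here (sensitivity, specificity, AUC) can be computed from model outputs as in this sketch with stand-in data:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

# stand-in labels and scores; y_true: 1 = fracture, 0 = normal
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 200)
y_score = np.clip(y_true * 0.6 + rng.random(200) * 0.5, 0, 1)  # fake model output

y_pred = (y_score >= 0.5).astype(int)                # threshold the probabilities
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)                         # true-positive rate
specificity = tn / (tn + fp)                         # true-negative rate
auc = roc_auc_score(y_true, y_score)                 # threshold-free ranking metric
print(f"sens={sensitivity:.3f}, spec={specificity:.3f}, AUC={auc:.3f}")
```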

  • Research Article
  • 10.1186/s13244-025-01922-w
Annotation-efficient, patch-based, explainable deep learning using curriculum method for breast cancer detection in screening mammography
  • Mar 19, 2025
  • Insights into Imaging
  • Ozden Camurdan + 8 more

Objectives: To develop an efficient deep learning (DL) model for breast cancer detection in mammograms, utilizing both weak (image-level) and strong (bounding-box) annotations and providing explainable artificial intelligence (XAI) with gradient-weighted class activation mapping (Grad-CAM), assessed by the ground-truth overlap ratio. Methods: Three radiologists annotated a balanced dataset of 1976 mammograms (cancer-positive and -negative) from three centers. We developed a patch-based DL model using curriculum learning, progressively increasing patch sizes during training. The model was trained under varying levels of strong supervision (0%, 20%, 40%, and 100% of the dataset), resulting in baseline, curriculum 20, curriculum 40, and curriculum 100 models. Training for each model was repeated ten times, with results presented as mean ± standard deviation. Model performance was also tested on an external dataset of 4276 mammograms to assess generalizability. Results: F1 scores for the baseline, curriculum 20, curriculum 40, and curriculum 100 models were 80.55 ± 0.88, 82.41 ± 0.47, 83.03 ± 0.31, and 83.95 ± 0.55, respectively, with ground-truth overlap ratios of 60.26 ± 1.91, 62.13 ± 1.2, 62.26 ± 1.52, and 64.18 ± 1.37. In the external dataset, F1 scores were 74.65 ± 1.35, 77.77 ± 0.73, 78.23 ± 1.78, and 78.73 ± 1.25, respectively, maintaining a similar performance trend. Conclusion: Training DL models with a curriculum method and a patch-based approach yields satisfactory performance and XAI, even with a limited set of densely annotated data, offering a promising avenue for deploying DL in large-scale mammography datasets. Critical relevance: This study introduces a DL model for mammography-based breast cancer detection, utilizing curriculum learning with limited, strongly labeled data. It showcases performance gains and better explainability, addressing the challenges of extensive dataset needs and DL's "black-box" nature. Key points:
  • Increasing numbers of mammograms for radiologists to interpret pose a logistical challenge.
  • We trained a DL model leveraging curriculum learning with mixed annotations for mammography.
  • The DL model outperformed the baseline model with image-level annotations using only 20% of the strong labels.
  • The study addresses the challenge of requiring extensive datasets and strong supervision for DL efficacy.
  • The model demonstrated improved explainability through Grad-CAM, verified by a higher ground-truth overlap ratio.
  • The proposed approach also yielded robust performance on external testing data.
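A minimal Grad-CAM sketch in the spirit of the paper's XAI component, using ResNet-18 and a random input as stand-ins for the mammography model and data:

```python
import torch
import torch.nn.functional as F
from torchvision import models

# generic Grad-CAM over the last conv block; ResNet-18 stands in for the real model
model = models.resnet18(weights="IMAGENET1K_V1").eval()
acts, grads = {}, {}
model.layer4.register_forward_hook(lambda m, i, o: acts.update(v=o))
model.layer4.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

def grad_cam(x):
    out = model(x)                              # x: (1, 3, H, W) image tensor
    model.zero_grad()
    out[0, out[0].argmax()].backward()          # gradient of the top class score
    w = grads["v"].mean(dim=(2, 3), keepdim=True)   # channel weights (GAP of grads)
    cam = F.relu((w * acts["v"]).sum(dim=1))        # weighted sum of feature maps
    return F.interpolate(cam.unsqueeze(1), size=x.shape[2:],
                         mode="bilinear", align_corners=False)[0, 0]

heatmap = grad_cam(torch.randn(1, 3, 224, 224))     # dummy input for illustration
```

The resulting heatmap can be thresholded and intersected with the annotated bounding box to compute a ground-truth overlap ratio of the kind reported above.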

  • Research Article
  • Citations: 2
  • 10.1088/1361-6560/ad953e
BD-StableNet: a deep stable learning model with an automatic lesion area detection function for predicting malignancy in BI-RADS category 3–4A lesions
  • Dec 3, 2024
  • Physics in Medicine & Biology
  • Hui Qu + 8 more

The latest developments combining deep learning technology and medical image data have attracted wide attention and provide efficient noninvasive methods for the early diagnosis of breast cancer. Success often depends on a large amount of data annotated by medical experts, which is time-consuming and may not always be feasible in the biomedical field, and the lack of interpretability has greatly hindered the application of deep learning in medicine. Deep stable learning, including causal inference, makes deep learning models more predictive and interpretable. In this study, to distinguish malignant tumors in Breast Imaging-Reporting and Data System (BI-RADS) category 3-4A breast lesions, we propose BD-StableNet, a deep stable learning model with automatic detection of lesion areas. In this retrospective study, we collected 3103 breast ultrasound images (1418 benign and 1685 malignant lesions) from 493 patients (361 benign and 132 malignant lesion patients) for model training and testing. Compared with other mainstream deep learning models, BD-StableNet has better prediction performance (accuracy = 0.952, area under the curve = 0.982, precision = 0.970, recall = 0.941, F1-score = 0.955, and specificity = 0.965). The lesion-area predictions and class activation maps both verify that the proposed model is highly interpretable. The results indicate that BD-StableNet significantly enhances diagnostic accuracy and interpretability, offering a promising noninvasive approach for the diagnosis of BI-RADS category 3-4A breast lesions. Clinically, BD-StableNet could reduce unnecessary biopsies, improve diagnostic efficiency, and ultimately enhance patient outcomes by providing more precise and reliable assessments of breast lesions.

  • Preprint Article
  • 10.1101/2025.06.04.25328868
Rad-Path Correlation of Deep Learning Models for Prostate Cancer Detection on MRI
  • Jun 4, 2025
  • A S C Verde + 16 more

While Deep Learning (DL) models trained on Magnetic Resonance Imaging (MRI) have shown promise for prostate cancer detection, their lack of direct biological validation often undermines radiologists' trust and hinders clinical adoption. Radiologic-histopathologic (rad-path) correlation has the potential to validate MRI-based lesion detection using digital histopathology. This study uses automated and manually annotated digital histopathology slides as a standard of reference to evaluate the spatial extent of lesion annotations derived from both radiologist interpretations and DL models previously trained on prostate bi-parametric MRI (bp-MRI). 117 histopathology slides were used as reference. Prospective patients with clinically significant prostate cancer underwent a bp-MRI examination before robotic radical prostatectomy, and each prostate specimen was sliced using a 3D-printed patient-specific mold to ensure a direct comparison between pre-operative imaging and histopathology slides. The histopathology slides and their corresponding T2-weighted MRI images were co-registered. We trained DL models for cancer detection on large retrospective datasets of T2-weighted MRI only, bp-MRI, and histopathology images, and performed inference in a prospective patient cohort. We evaluated the spatial overlap between lesions detected by the different models, and between detected lesions and the histopathological and radiological ground truth, using the Dice similarity coefficient (DSC). The DL models trained on digital histopathology tiles and MRI images demonstrated promising capabilities in lesion detection. A low overlap was observed between the lesion detection masks generated by the histopathology and bp-MRI models (DSC = 0.10); however, the overlap between radiologist annotations and the histopathology ground truth was comparable (DSC = 0.08). A rad-path correlation pipeline was established in a prospective cohort of prostate cancer patients undergoing surgery. The correlation between rad-path DL models was low but comparable to the overlap between annotations. While DL models show promise in prostate cancer detection, challenges remain in integrating MRI-based predictions with histopathological findings.
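The Dice similarity coefficient used for the rad-path overlap is straightforward to compute; a small sketch with toy masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()       # overlapping pixels
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

m1 = np.zeros((8, 8)); m1[2:6, 2:6] = 1      # toy lesion mask from one model
m2 = np.zeros((8, 8)); m2[3:7, 3:7] = 1      # toy lesion mask from the reference
print(dice(m1, m2))                          # 0.5625
```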

  • Research Article
  • Citations: 114
  • 10.5455/aim.2019.27.327-332
Deep Transfer Learning Models for Medical Diabetic Retinopathy Detection.
  • Jan 1, 2019
  • Acta Informatica Medica
  • Nour Khalifa + 3 more

Introduction: Diabetic retinopathy (DR) is the most common diabetic eye disease worldwide and a leading cause of blindness. The number of diabetic patients will increase to 552 million by 2034, as per the International Diabetes Federation (IDF). Aim: With advances in computer science techniques, such as artificial intelligence (AI) and deep learning (DL), opportunities for the detection of DR at the early stages have increased. This means that the chances of recovery will increase and the possibility of vision loss in patients will be reduced in the future. Methods: In this paper, deep transfer learning models for medical DR detection were investigated. The DL models were trained and tested on the Asia Pacific Tele-Ophthalmology Society (APTOS) 2019 dataset. According to literature surveys, this research is considered one of the first studies to use the APTOS 2019 dataset, as it was freshly published in the second quarter of 2019. The selected deep transfer models were AlexNet, ResNet18, SqueezeNet, GoogleNet, VGG16, and VGG19. These models were selected as they consist of a small number of layers compared to larger models, such as DenseNet and InceptionResNet. Data augmentation techniques were used to render the models more robust and to overcome the overfitting problem. Results: The testing accuracy and performance metrics, such as precision, recall, and F1 score, were calculated to prove the robustness of the selected models. The AlexNet model achieved the highest testing accuracy at 97.9%, and the achieved performance metrics strengthened this result. Moreover, AlexNet has a minimal number of layers, which decreases the training time and the computational complexity.
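A hedged sketch of this kind of transfer-learning setup: an ImageNet-pretrained AlexNet with its final layer swapped for the five APTOS 2019 severity grades, frozen convolutional features, and simple augmentation (hyperparameters are illustrative, not the paper's):

```python
import torch
from torchvision import models, transforms

# illustrative augmentation of the kind used to fight overfitting
aug = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(15),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

model = models.alexnet(weights="IMAGENET1K_V1")
model.classifier[6] = torch.nn.Linear(4096, 5)   # APTOS 2019: severity grades 0-4
for p in model.features.parameters():            # freeze the convolutional backbone
    p.requires_grad = False
optim = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)
# training loop over an augmented APTOS DataLoader would follow here
```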

  • Conference Article
  • Citations: 13
  • 10.1109/icais56108.2023.10073683
Automated Text-based Depression Detection using Hybrid ConvLSTM and Bi-LSTM Model
  • Feb 2, 2023
  • Neda Firoz + 4 more

Depression and its symptoms are very common mental health disorders. They affect a person's day-to-day activities and degrade quality of life. This article presents a comparative study of different deep learning models for detecting depression from textual natural-language data. Several studies have applied artificial intelligence and state-of-the-art deep learning methods to depression detection. This article investigates the state-of-the-art models, performs hyperparameter tuning for the best accuracy, and develops a hybrid model for depression detection with improved accuracy scores. The aim of our study is to review and compare existing findings on deep learning and machine learning models for depression detection and to build a precise hybrid model with higher accuracy and scores.
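A minimal Keras sketch of a hybrid convolutional/Bi-LSTM text classifier of the kind the paper develops (a Conv1D layer stands in for the ConvLSTM component; the vocabulary size and layer widths are assumptions):

```python
import tensorflow as tf
from tensorflow.keras import layers

vocab_size = 20000                               # assumed tokenizer vocabulary
model = tf.keras.Sequential([
    layers.Embedding(vocab_size, 128),           # token embeddings
    layers.Conv1D(64, 5, activation="relu"),     # local n-gram features
    layers.MaxPooling1D(2),
    layers.Bidirectional(layers.LSTM(64)),       # long-range context in both directions
    layers.Dense(1, activation="sigmoid"),       # depressed / not depressed
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(padded_token_ids, labels, ...) on a labeled text corpus would follow
```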

  • Research Article
  • Citations: 33
  • 10.1016/j.aej.2022.12.009
Real-time driver distraction recognition: A hybrid genetic deep network based approach
  • Dec 17, 2022
  • Alexandria Engineering Journal
  • Abeer A Aljohani

Distraction while driving is a serious issue that causes serious direct and indirect harm to society. To avoid these problems, detecting dangerous driver behaviour is very important. This research focuses on detecting driver behaviour with a combination of deep learning and machine learning models and a genetic algorithm. Most previous works have used a convolutional neural network (CNN) as the deep learning model or a support vector machine as the machine learning model for detecting driver actions from input images. The proposed structure uses a genetic algorithm to first choose the feature-extractor structure from well-known CNN models such as VGG19, ResNet50, and DenseNet121. After selecting the feature extractor, the proposed framework adds two dense layers for classification as the deep learning model. On the machine learning side, k-nearest neighbours, random forest, support vector machine, and extreme gradient boosting algorithms are used as classifiers. The genetic algorithm specifies the number of neurons and activation functions of the dense layers for the deep learning model, and hyperparameters such as the number of estimators for the machine learning models. The proposed model was developed on the State Farm dataset, which contains one safe-driving class and nine dangerous behaviours such as texting while driving, talking with passengers, and drinking. Experimental results indicate 99.80% accuracy for classification of the State Farm distracted-driver dataset with the combination of genetic algorithms and deep neural networks. Compared to similar research, the proposed approach shows superior results for this classification task. Because it chooses the feature-extraction model and the hyperparameters of the classification layers automatically, it can also be applied to driving-behaviour classification in new situations. The proposed framework can be used for real-time driver-distraction detection to decrease traffic accidents and alleviate the corresponding harm to drivers.
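A toy genetic algorithm over a search space like the one described (backbone, dense-layer settings, classifier choice). The search space and the fitness function are stand-ins; in the paper's setting, fitness would be the validation accuracy of the model assembled from the genome:

```python
import random

# assumed search space, loosely mirroring the options named in the abstract
SPACE = {
    "backbone": ["VGG19", "ResNet50", "DenseNet121"],
    "units": [64, 128, 256],
    "activation": ["relu", "tanh"],
    "classifier": ["knn", "rf", "svm", "xgb"],
}

def random_genome():
    return {k: random.choice(v) for k, v in SPACE.items()}

def crossover(a, b):                        # uniform crossover of two parents
    return {k: random.choice([a[k], b[k]]) for k in SPACE}

def mutate(g, rate=0.2):                    # random resampling of some genes
    return {k: random.choice(SPACE[k]) if random.random() < rate else v
            for k, v in g.items()}

def fitness(genome):
    # stand-in: the real fitness builds and trains the model described by
    # `genome` and returns its validation accuracy
    return random.random()

def evolve(pop_size=10, gens=5):
    pop = [random_genome() for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)         # rank by fitness
        parents = pop[: pop_size // 2]              # keep the best half
        pop = parents + [mutate(crossover(*random.sample(parents, 2)))
                         for _ in range(pop_size - len(parents))]
    return max(pop, key=fitness)

print(evolve())
```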

  • Conference Article
  • Citations: 18
  • 10.1145/3378936.3378968
Performance Comparison of Deep Learning Models for Black Lung Detection on Chest X-ray Radiographs
  • Jan 12, 2020
  • Liton Devnath + 3 more

Black Lung (BL) is an incurable respiratory disease caused by long-term inhalation of respirable coal dust. Confidentiality restrictions and disease incidence limit the availability of BL datasets, which presents significant challenges in training deep learning (DL) models. This paper presents the implementations and a detailed performance comparison of seven DL models for BL detection with small datasets: VGG16, VGG19, InceptionV3, Xception, ResNet50, DenseNet121, and CheXNet. A small BL dataset of real and synthetic images was used to train the seven models. Segmented lung X-ray images, with and without BL, were used as training images to establish a benchmark. To increase the number of images required for training, the training dataset was augmented using Cycle-Consistent Adversarial Networks (CycleGAN) and the Keras Image Data Generator to generate additional augmented and synthetic radiographs. The effects of different dropout configurations as a blocking factor were also investigated on all seven models. The best sensitivity (normal prediction rate), specificity (BL prediction rate), error rate (ERR, the incorrect prediction rate), accuracy (1 − ERR), and total execution time for binary classification were compared for each model, with and without augmentation, for optimal BL detection. On average, the CheXNet model gave the best performance of all seven DL models.
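The classical half of the augmentation pipeline (the Keras Image Data Generator; CycleGAN synthesis is out of scope here) might look like this sketch, with a hypothetical directory layout:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# geometric augmentation to stretch a small X-ray dataset
datagen = ImageDataGenerator(
    rotation_range=10,
    width_shift_range=0.1,
    height_shift_range=0.1,
    zoom_range=0.1,
    horizontal_flip=True,
)
# "xrays/train" is a hypothetical folder with one subdirectory per class
train_iter = datagen.flow_from_directory(
    "xrays/train", target_size=(224, 224), class_mode="binary", batch_size=16)
```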

  • Research Article
  • Citations: 9
  • 10.3389/fpls.2024.1383863
Lightweight cotton diseases real-time detection model for resource-constrained devices in natural environments.
  • Jun 6, 2024
  • Frontiers in plant science
  • Pan Pan + 7 more

Cotton, a vital textile raw material, is intricately linked to people's livelihoods. Throughout the cotton cultivation process, various diseases threaten cotton crops, significantly impacting both cotton quality and yield. Deep learning has emerged as a crucial tool for detecting these diseases. However, deep learning models with high accuracy often come with redundant parameters, making them challenging to deploy on resource-constrained devices, and existing detection models struggle to strike the right balance between accuracy and speed. This study introduces the CDDLite-YOLO model, an innovation based on the YOLOv8 model, designed for detecting cotton diseases in natural field conditions. The C2f-Faster module replaces the Bottleneck structure in the C2f module within the backbone network, using partial convolution. The neck network adopts a Slim-neck structure by replacing the C2f module with the GSConv and VoVGSCSP modules, based on GSConv. In the head, we introduce the MPDIoU loss function, addressing limitations in existing loss functions, and we designed the PCDetect detection head, integrating the PCD module and replacing some CBS modules with PCDetect. Our experimental results demonstrate the effectiveness of the CDDLite-YOLO model, achieving a remarkable mean average precision (mAP) of 90.6%. With a mere 1.8M parameters, 3.6G FLOPs, and a rapid detection speed of 222.22 FPS, it outperforms other models, striking a harmonious balance between detection speed, accuracy, and model size and positioning it as a promising candidate for deployment on an embedded GPU chip without sacrificing performance. Our model serves as a pivotal technical advancement, facilitating timely cotton disease detection and providing valuable insights for the design of detection models for agricultural inspection robots and other resource-constrained agricultural devices.
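For orientation, a sketch of the MPDIoU idea mentioned above: IoU penalised by the normalised squared distances between matching box corners. This follows the published MPDIoU formulation as a stand-alone function, not the CDDLite-YOLO code:

```python
def mpdiou_loss(pred, gt, img_w, img_h):
    """1 - MPDIoU for two boxes in (x1, y1, x2, y2) format (sketch)."""
    ix1, iy1 = max(pred[0], gt[0]), max(pred[1], gt[1])
    ix2, iy2 = min(pred[2], gt[2]), min(pred[3], gt[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)          # intersection area
    union = ((pred[2] - pred[0]) * (pred[3] - pred[1])
             + (gt[2] - gt[0]) * (gt[3] - gt[1]) - inter)
    iou = inter / union
    norm = img_w ** 2 + img_h ** 2                             # image-diagonal normaliser
    d_tl = (pred[0] - gt[0]) ** 2 + (pred[1] - gt[1]) ** 2     # top-left corner distance
    d_br = (pred[2] - gt[2]) ** 2 + (pred[3] - gt[3]) ** 2     # bottom-right corner distance
    return 1.0 - (iou - d_tl / norm - d_br / norm)

print(mpdiou_loss((10, 10, 50, 50), (12, 14, 48, 55), 640, 640))
```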

  • Research Article
  • Citations: 3
  • 10.1007/s00234-023-03170-5
Validation of a deep learning model for traumatic brain injury detection and NIRIS grading on non-contrast CT: a multi-reader study with promising results and opportunities for improvement.
  • Jun 3, 2023
  • Neuroradiology
  • Bin Jiang + 14 more

This study aimed to assess and externally validate the performance of a deep learning (DL) model for the interpretation of non-contrast computed tomography (NCCT) scans of patients with suspected traumatic brain injury (TBI). This retrospective, multi-reader study included patients with suspected TBI who were transported to the emergency department and underwent NCCT scans. Eight reviewers with varying levels of training and experience (two neuroradiology attendings, two neuroradiology fellows, two neuroradiology residents, one neurosurgery attending, and one neurosurgery resident) independently evaluated NCCT head scans. The same scans were evaluated using version 5.0 of the DL model icobrain tbi. The ground truth was established through a thorough assessment of all accessible clinical and laboratory data, as well as follow-up imaging studies including NCCT and magnetic resonance imaging, as a consensus amongst the study reviewers. The outcomes of interest included neuroimaging radiological interpretation system (NIRIS) scores; the presence of midline shift, mass effect, hemorrhagic lesions, hydrocephalus, and severe hydrocephalus; and measurements of midline shift and hemorrhagic lesion volumes. Agreement was compared using the weighted Cohen's kappa coefficient, diagnostic performance using the McNemar test, and measurements using Bland-Altman plots. One hundred patients were included, with the DL model successfully categorizing 77 scans. The median age was 48 for the total group, 44.5 for the omitted group, and 48 for the included group. The DL model demonstrated moderate agreement with the ground truth, trainees, and attendings, and with the DL model's assistance, trainees' agreement with the ground truth improved. The DL model showed high specificity (0.88) and positive predictive value (0.96) in classifying NIRIS scores as 0-2 or 3-4; trainees and attendings had the highest accuracy (0.95). The DL model's performance in classifying various TBI CT imaging common data elements was comparable to that of trainees and attendings. The average difference for the DL model in quantifying hemorrhagic lesion volume was 6.0 mL with a wide 95% confidence interval (CI) of −68.32 to 80.22, and for midline shift the average difference was 1.4 mm with a 95% CI of −3.4 to 6.2. While the DL model outperformed trainees in some aspects, attendings' assessments remained superior in most instances. Using the DL model as an assistive tool benefited trainees, improving their NIRIS score agreement with the ground truth. Although the DL model showed high potential in classifying some TBI CT imaging common data elements, further refinement and optimization are necessary to enhance its clinical utility.
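The agreement statistics reported here (weighted Cohen's kappa, Bland-Altman bias and limits of agreement) can be reproduced on paired readings as in this sketch with made-up values:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# hypothetical NIRIS scores (0-4) from the model and the consensus ground truth
truth = np.array([0, 1, 2, 3, 4, 2, 1, 0, 3, 2])
model = np.array([0, 1, 2, 4, 4, 2, 0, 0, 3, 1])
kappa = cohen_kappa_score(truth, model, weights="linear")    # weighted agreement

# Bland-Altman summary for paired volume measurements (mL, hypothetical)
truth_vol = np.array([12.0, 30.5, 7.2, 55.0])
model_vol = np.array([10.5, 38.0, 6.0, 60.2])
diff = model_vol - truth_vol
bias = diff.mean()                                           # mean difference
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))
print(f"kappa={kappa:.2f}, bias={bias:.1f} mL, 95% LoA={loa}")
```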

  • Research Article
  • 10.1016/j.procs.2023.10.376
A survey on pre-training requirements for deep learning models to detect obstructive sleep apnea events
  • Jan 1, 2023
  • Procedia Computer Science
  • Ángel Serrano Alarcón + 3 more
