Deep learning-based model for detection of intracranial waveforms with poor brain compliance in southern Thailand
Background: Intracranial pressure (ICP) waveform analysis provides critical insights into brain compliance and can aid in the early detection of neurological deterioration. Deep learning (DL) has recently emerged as an effective approach for analyzing complex medical signals and imaging data. The aim of the present research was to develop a DL-based model for detecting ICP waveforms indicative of poor brain compliance. Methods: A retrospective cohort study was conducted using ICP wave images collected from postoperative hydrocephalus (HCP) patients who underwent ventriculostomy. The images were categorized into normal and poor compliance waveforms. Precision, recall, mean average precision at the 0.5 intersection-over-union threshold (mAP_0.5), and the area under the receiver operating characteristic curve (AUC) were used to evaluate model performance. Results: The dataset consisted of 2,744 ICP wave images from 21 HCP patients. The best-performing model achieved a precision of 0.97, a recall of 0.96, and a mAP_0.5 of 0.989. The confusion matrix for poor brain compliance waveform detection on the test dataset also demonstrated high classification accuracy, with true positive and true negative rates of 48.5% and 47.8%, respectively. Additionally, the model demonstrated high accuracy in detecting poor compliance waveforms, achieving a mAP_0.5 of 0.994, a sensitivity of 0.956, a specificity of 0.970, and an AUC of 0.96. Conclusions: The DL-based model successfully detected pathological ICP waveforms, thereby enhancing clinical decision-making. As DL advances, its significance in neurocritical care will help pave the way for more individualized and data-driven approaches to brain monitoring and management.
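As a rough illustration of the detection metrics reported above (not the authors' evaluation code): under mAP_0.5, a predicted box counts as a true positive only when its intersection-over-union (IoU) with an as-yet-unmatched ground-truth box reaches 0.5. A minimal sketch:

```python
# Illustrative sketch only, not the paper's evaluation pipeline.

def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def precision_recall(preds, gts, thr=0.5):
    """Greedy one-to-one matching of predictions to ground truths at IoU >= thr."""
    matched, tp = set(), 0
    for p in preds:
        best_i, best_v = None, thr
        for i, g in enumerate(gts):
            if i not in matched and iou(p, g) >= best_v:
                best_i, best_v = i, iou(p, g)
        if best_i is not None:   # this prediction matched a ground truth
            matched.add(best_i)
            tp += 1
    return tp / len(preds), tp / len(gts)
```

For example, one perfect detection alongside one missed lesion and one false alarm gives precision = recall = 0.5; mAP_0.5 then averages precision over recall levels as the confidence threshold is swept.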
- 10.21037/jmai-24-120
- Jun 1, 2025
- Journal of Medical Artificial Intelligence
15
- 10.1007/s00521-023-08781-w
- Jul 14, 2023
- Neural Computing and Applications
4
- 10.1371/journal.pone.0270916.r004
- Jul 1, 2022
- PLoS ONE
- 10.4266/acc.004080
- Feb 28, 2025
- Acute and Critical Care
3
- 10.3390/solar5010006
- Feb 21, 2025
- Solar
13
- 10.1109/embc46164.2021.9630274
- Nov 1, 2021
- Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)
7
- 10.4266/acc.2021.01795
- Jun 23, 2022
- Acute and Critical Care
4
- 10.4266/acc.2023.00094
- Aug 1, 2023
- Acute and Critical Care
17
- 10.1016/s0899-5885(18)30079-0
- Dec 1, 2000
- Critical Care Nursing Clinics of North America
15
- 10.1371/journal.pone.0270916
- Jul 1, 2022
- PLoS ONE
- Conference Article
6
- 10.1109/iembs.2004.1403165
- Jan 1, 2004
Patients with increased intracranial pressure (ICP) caused by hydrocephalus or brain injury have poor brain compliance or increased brain stiffness. The condition is commonly treated by surgical diversion of cerebrospinal fluid (CSF) through placement of a ventriculoperitoneal (VP) shunt. These inserted devices frequently fail and require replacement. Assessment of failed devices typically requires an invasive surgical procedure to implant an ICP sensor. Brain compliance can be determined non-invasively by comparing the ICP waveform to the digital artery waveform. The ICP waveform is derived from a piezo sensor snugged into the external ear canal and worn as a headset. The digital artery waveform is derived from a standard pulse oximeter. Digital signal processing performed on sampled data from these two sensors shows a time lag, or phase relationship, between the two waves, which widens as brain stiffness worsens and compliance falls. An algorithm is presented that shows how these signals can be used to compute brain compliance. An instrument designed to calculate real-time brain compliance to aid healthcare professionals is described.
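The abstract does not publish its signal-processing method, but one standard way to estimate the time lag it describes is to locate the peak of the cross-correlation between the two zero-meaned waveforms. A sketch under that assumption (`estimate_lag` is a hypothetical helper name, not from the paper):

```python
import numpy as np

def estimate_lag(sig_a, sig_b, fs):
    """Estimate the lag (in seconds) of sig_b relative to sig_a via the
    peak of their full cross-correlation. fs is the sampling rate in Hz.
    Positive result: sig_b trails sig_a."""
    a = sig_a - np.mean(sig_a)            # remove DC offset
    b = sig_b - np.mean(sig_b)
    corr = np.correlate(b, a, mode="full")
    lags = np.arange(-len(a) + 1, len(b)) # sample lags matching 'full' output
    return lags[np.argmax(corr)] / fs
```

A widening positive lag of the ear-canal ICP surrogate behind the digital-artery pulse would, per the abstract, indicate worsening compliance.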
- Conference Article
1
- 10.1109/issmd.2004.1689554
- Sep 11, 2006
- Research Article
20
- 10.1371/journal.pone.0265751
- Mar 24, 2022
- PLoS ONE
Objectives: The objective of this study was to develop and validate a state-of-the-art, deep learning (DL)-based model for detecting breast cancers on mammography. Methods: Mammograms in a hospital development dataset, a hospital test dataset, and a clinic test dataset were retrospectively collected from January 2006 through December 2017 at Osaka City University Hospital and Medcity21 Clinic. The hospital development dataset and a publicly available digital database for screening mammography (DDSM) dataset were used to train and validate RetinaNet, one type of DL-based model, with five-fold cross-validation. The model's sensitivity, mean false positive indications per image (mFPI), and partial area under the curve (AUC) at 1.0 mFPI were assessed externally on both test datasets. Results: The hospital development dataset, hospital test dataset, clinic test dataset, and DDSM development dataset included a total of 3179 images (1448 malignant images), 491 images (225 malignant images), 2821 images (37 malignant images), and 1457 malignant images, respectively. The proposed model detected all cancers with 0.45–0.47 mFPI and had partial AUCs of 0.93 in both test datasets. Conclusions: The DL-based model developed for this study was able to detect all breast cancers with a very low mFPI. Our DL-based model achieved the highest performance to date, which might lead to improved diagnosis for breast cancer.
- Research Article
91
- 10.1016/j.cmpb.2022.106903
- May 23, 2022
- Computer Methods and Programs in Biomedicine
YOLO-LOGO: A transformer-based YOLO segmentation model for breast mass detection and segmentation in digital mammograms
- Research Article
4
- 10.1080/02688697.2021.1947971
- Jul 9, 2021
- British Journal of Neurosurgery
Objectives: To explore the prognostic factors of patients with low-grade optic pathway glioma (OPG) and the optimal treatment to reduce the incidence of postoperative hydrocephalus. Patients and methods: This single-center study retrospectively analyzed data from 66 patients with OPGs who underwent surgery. The patients were followed, and overall survival (OS) and progression-free survival (PFS) were determined. The effects of different treatments on hydrocephalus were compared. Results: Postoperative hydrocephalus was identified as a factor that increased the risk of mortality 1.99-fold (p = .028), and the 5-year survival rate was significantly lower among patients with postoperative hydrocephalus (p = .027). The main factors leading to preoperative hydrocephalus were large tumor volume and invasion into the third ventricle. Gross total resection (GTR) could reduce the risk of long-term hydrocephalus (p = .046). Age younger than 4 years (p = .046) and tumor invasion range/classification (p = .029) were the main factors reducing the five-year survival rate. Postoperative radiotherapy (RT) and chemotherapy (CT) had no significant effects on OS. Extraventricular drainage (EVD) was not associated with perioperative infection (p = .798) or bleeding (p = .09). Compared with two-stage surgery (placement of external ventricular drainage or a ventriculoperitoneal shunt (VPS) first, followed by tumor resection), one-stage surgery (direct tumor resection) did not increase complications. Conclusions: Postoperative hydrocephalus is mostly obstructive, and it is an important factor that reduces the OS of patients with low-grade OPGs. Surgery that removes the tumor to the greatest extent possible and improves cerebrospinal fluid circulation is effective at reducing the incidence of postoperative hydrocephalus. For patients whose ventricles remain dilated after surgery, clinicians should consider poor ventricular compliance while remaining alert to the persistence and progression of hydrocephalus.
- Research Article
- 10.57197/jdr-2025-0642
- Jan 1, 2025
- Journal of Disability Research
Advances in deep learning and computer vision have revolutionized object detection, enabling real-time and accurate object recognition. These object detection technologies can potentially transform accessibility solutions, especially for individuals with visual impairments. This study aims to enhance accessibility and environment-effective interaction for individuals with visual disabilities by detecting and naming objects in real-world environments. This study examines and optimizes the potential of a set of developed deep learning models, including YOLOv8L, YOLO11x, and Faster region-based convolutional neural network (R-CNN) with seven backbone models for multi-class object detection to enhance object recognition and provide auditory feedback; these models aim to bridge the gap between the visually impaired and their surroundings. In addition, we attempt to propose a system that translates detections into audible descriptions, empowering individuals to navigate and interact with the world independently by integrating object detection with text-to-speech (TTS) technology. The models leverage Arabic-translated PASCAL VOC 2007 and 2012 datasets, with performance evaluated through precision, recall, and mean average precision (mAP). The results revealed that YOLO11x achieves the highest mAP of 0.86, followed by YOLOv8L with an mAP of 0.83. Faster R-CNN with EfficientNet-B3, HRNet-w32, and MobileNetV3-Large showed the highest accuracy among other backbones with 79%, 78%, and 75%, respectively. The study proves the efficacy of deep learning models in accessibility applications as assistive technologies for individuals with visual impairments and highlights opportunities for future development.
- Research Article
2
- 10.1007/s00381-023-05922-3
- Mar 28, 2023
- Child's Nervous System
Ventriculoperitoneal (VP) shunting is the primary therapy for hydrocephalus in children; however, the technique is prone to malfunctions, which can be detected through assessment of clinical signs and imaging results. Furthermore, early detection can prevent patient deterioration and guide clinical and surgical treatment. A 5-year-old female with a medical history of neonatal IVH, secondary hydrocephalus, multiple VP shunt revisions, and slit ventricle syndrome was evaluated using a noninvasive intracranial pressure monitoring device at the early stage of her clinical symptoms, which showed increased intracranial pressure and poor brain compliance. Serial MRI images demonstrated slight ventricular enlargement, leading to the use of a gravitational VP shunt and progressive improvement. At follow-up visits, we used the noninvasive ICP monitoring device to guide shunt adjustments until symptom resolution. The patient has been asymptomatic for the past 3 years without requiring new shunt revisions. Slit ventricle syndrome and VP shunt dysfunction are challenging diagnoses for the neurosurgeon. Noninvasive intracranial monitoring has allowed closer follow-up, assisting early assessment of brain compliance changes related to a patient's symptomatology. Furthermore, this technique has high sensitivity and specificity in detecting alterations in intracranial pressure, serving as a guide for adjustments of programmable VP shunts, which may improve the patient's quality of life. Noninvasive ICP monitoring may lead to a less invasive assessment of patients with slit ventricle syndrome and could be used as a guide for adjustments of programmable shunts.
- Research Article
- 10.2174/0118750362370483250314042749
- Apr 3, 2025
- The Open Bioinformatics Journal
Background The Kirby-Bauer disk diffusion method is a cost-effective and widely used technique for determining antimicrobial susceptibility, suitable for diverse laboratory settings. It involves placing antibiotic disks on a Mueller-Hinton agar plate inoculated with standardized bacteria, leading to inhibition zones after incubation. These zones are manually measured and compared to the Clinical and Laboratory Standards Institute (CLSI) criteria to classify bacteria. However, manual interpretation can introduce variability due to human error, operator skill, and environmental factors, especially in resource-limited settings. Advances in AI and deep learning now enable automation, reducing errors and enhancing consistency in antimicrobial resistance management. Objective This study evaluated two deep learning models—Faster R-CNN (ResNet-50 and ResNet-101 backbones) and RetinaNet (ResNet-50 backbone)—for detecting antibiotic disks, inhibition zones, and antibiotic abbreviations on Kirby-Bauer test images. The aim was to automate interpretation and improve clinical decision-making. Methods A dataset of 291 Kirby-Bauer test images was annotated for agar plates, antibiotic disks, and inhibition zones. Images were split into training (80%) and evaluation (20%) sets and processed using Azure Machine Learning. Model performance was assessed using mean Average Precision (mAP), precision, recall, and inference time. Automated zone measurements were compared with manual readings using CLSI standards. Results Faster R-CNN with ResNet-101 achieved the highest mAP (0.962) and recall (0.972), excelling in detecting small zones. ResNet-50 offered balanced performance with lower computational demands. RetinaNet, though efficient, showed recall variability at higher thresholds. Automated measurements correlated strongly with manual readings, achieving 99% accuracy for susceptibility classification. 
Conclusion Faster R-CNN with ResNet-101 excels in accuracy-critical applications, while RetinaNet offers efficient, real-time alternatives. These findings demonstrate the potential of AI-driven automation to improve antibiotic susceptibility testing in clinical microbiology.
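The final susceptibility call described above is a simple lookup of the measured zone diameter against CLSI breakpoints. A minimal sketch of that step; the breakpoint values below are hypothetical placeholders, since real cutoffs must be taken from the CLSI M100 tables for the specific drug and organism:

```python
# Hypothetical breakpoints for illustration only; consult CLSI M100 for real values.
# Each entry: (susceptible if diameter >=, resistant if diameter <=), in mm.
BREAKPOINTS_MM = {
    "ciprofloxacin": (21, 16),
}

def classify_zone(antibiotic, diameter_mm):
    """Classify an inhibition-zone diameter as S / I / R against the table."""
    s_cut, r_cut = BREAKPOINTS_MM[antibiotic]
    if diameter_mm >= s_cut:
        return "S"   # susceptible
    if diameter_mm <= r_cut:
        return "R"   # resistant
    return "I"       # intermediate
```

In the automated pipeline, the detector's measured zone diameter would replace the manual caliper reading as the input to this lookup.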
- Research Article
15
- 10.1200/po.20.00176
- Nov 1, 2021
- JCO Precision Oncology
The molecular subtype of breast cancer is an important component of establishing the appropriate treatment strategy. In clinical practice, molecular subtypes are determined by receptor expressions. In this study, we developed a model using deep learning to determine receptor expressions from mammograms. A developing data set and a test data set were generated from mammograms from the affected side of patients who were pathologically diagnosed with breast cancer from January 2006 through December 2016 and from January 2017 through December 2017, respectively. The developing data sets were used to train and validate the DL-based model with five-fold cross-validation for classifying expression of estrogen receptor (ER), progesterone receptor (PgR), and human epidermal growth factor receptor 2-neu (HER2). The area under the curves (AUCs) for each receptor were evaluated with the independent test data set. The developing data set and the test data set included 1,448 images (997 ER-positive and 386 ER-negative, 641 PgR-positive and 695 PgR-negative, and 220 HER2-enriched and 1,109 non-HER2-enriched) and 225 images (176 ER-positive and 40 ER-negative, 101 PgR-positive and 117 PgR-negative, and 53 HER2-enriched and 165 non-HER2-enriched), respectively. The AUC of ER-positive or -negative in the test data set was 0.67 (0.58-0.76), the AUC of PgR-positive or -negative was 0.61 (0.53-0.68), and the AUC of HER2-enriched or non-HER2-enriched was 0.75 (0.68-0.82). The DL-based model effectively classified the receptor expressions from the mammograms. Applying the DL-based model to predict breast cancer classification with a noninvasive approach would have additive value to patients.
- Research Article
- 10.9734/jerr/2024/v26i91263
- Aug 30, 2024
- Journal of Engineering Research and Reports
The YOLOv5 algorithm is widely used in object detection due to its efficient inference speed and high accuracy; however, it still faces challenges in small object detection. This paper proposes a series of improvements, including the addition of small object detection layers, the integration of the CBAM attention mechanism, and the optimization of the loss function by introducing EIoU, to enhance the model's feature extraction capability and detection accuracy. First, the paper enhances the network's perception of small objects by adding pyramid low-level semantic layers and constructing new small object detection heads. Second, the CBAM module is integrated into the C3 module, improving the model's feature representation ability and effectively preventing information loss. Finally, the EIoU loss function enhances the quality contribution of anchor boxes, improving the model's detection accuracy and regression speed. Experimental results show that the improved YOLOv5 algorithm performs excellently on the BDD100K dataset, especially in small object detection. Compared with the original algorithm, it shows improvements in detection accuracy, recall rate, and mean average precision (mAP); despite a slight increase in parameters and computation, it still meets real-time requirements. This research provides strong support for further enhancing small object detection in autonomous driving scenarios.
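The EIoU loss mentioned above, as usually formulated, adds to 1 − IoU a center-distance term plus separate width and height penalties, each normalized by the smallest enclosing box. A plain-Python sketch of that formulation (the paper's exact variant may differ):

```python
def eiou_loss(p, g):
    """EIoU loss for two boxes (x1, y1, x2, y2): 1 - IoU plus normalized
    center-distance, width, and height penalty terms."""
    # IoU
    ix1, iy1 = max(p[0], g[0]), max(p[1], g[1])
    ix2, iy2 = min(p[2], g[2]), min(p[3], g[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    pw, ph = p[2] - p[0], p[3] - p[1]
    gw, gh = g[2] - g[0], g[3] - g[1]
    iou = inter / (pw * ph + gw * gh - inter)
    # smallest enclosing box
    cw = max(p[2], g[2]) - min(p[0], g[0])
    ch = max(p[3], g[3]) - min(p[1], g[1])
    # squared center distance over squared enclosing diagonal
    pcx, pcy = (p[0] + p[2]) / 2, (p[1] + p[3]) / 2
    gcx, gcy = (g[0] + g[2]) / 2, (g[1] + g[3]) / 2
    dist = ((pcx - gcx) ** 2 + (pcy - gcy) ** 2) / (cw ** 2 + ch ** 2)
    # width/height mismatch penalties, normalized by the enclosing box
    wpen = (pw - gw) ** 2 / cw ** 2
    hpen = (ph - gh) ** 2 / ch ** 2
    return 1 - iou + dist + wpen + hpen
```

Unlike plain IoU loss, the separate width and height terms keep a useful gradient when boxes overlap poorly, which is what helps the small-object regression the paper targets.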
- Research Article
2
- 10.1016/j.pmip.2024.100125
- Apr 5, 2024
- Personalized Medicine in Psychiatry
The use of machine learning and deep learning models in detecting depression on social media: A systematic literature review
- Research Article
- 10.3171/2025.4.focus24940
- Jul 1, 2025
- Neurosurgical Focus
Endoscopic endonasal transsphenoidal surgery (EETS) is a minimally invasive procedure that accesses the sellar and parasellar regions. Various anatomical structures must be identified during the operation, particularly the sella turcica and internal carotid artery (ICA) bilaterally. In the present retrospective cohort study, the authors aimed to evaluate the performance of a deep learning (DL) model in detecting the sella turcica and ICA bilaterally in EETS video footage, with the goal of recognizing crucial landmarks and preventing potentially fatal injury. The authors collected images from the endoscopic video footage of 98 patients who had undergone EETS from January 2015 to June 2024. The ICAs and sella turcica were labeled by neurosurgeons, and the entire dataset was divided into training, validation, and test datasets at a ratio of 7:2:1. The model for ICA and sella turcica detection was trained using the YOLOv5s object detection architecture, and precision, recall, mean average precision (mAP)@0.5, and mAP@0.5:0.95 were reported during the validation process. Moreover, the confusion matrix and area under the receiver operating characteristic curve (AUC) were assessed from the model using unseen images from the test dataset. The DL model had precision, recall, mAP@0.5, and mAP@0.5:0.95 of 0.942, 0.955, 0.969, and 0.617, respectively, for all objects in the training processes with validation. For testing the model with unseen images, the AUC was 0.97 (95% CI 0.95-0.98), whereas average precision was 0.99 (95% CI 0.99-1.00). For ICA detection with a multiclass approach, the AUCs were 0.98 (95% CI 0.97-0.99) for the absence of any ICA, 0.93 (95% CI 0.91-0.95) for one ICA in the image, and 0.95 (95% CI 0.93-0.96) for both ICAs in the image. Additionally, accuracy for the ICA and sella turcica was 0.958 and 0.965, respectively. Complex anatomical landmarks should be recognized during EETS.
The computer vision model was effective in detecting the sella turcica and ICA bilaterally, as well as in identifying and avoiding fatal complications. For the model to generalize with reliability, it requires novel, unseen data from various settings to refine it and facilitate transfer learning.
- Research Article
103
- 10.1038/s41598-021-04667-w
- Jan 14, 2022
- Scientific Reports
We developed and validated a deep learning (DL)-based model using the segmentation method and assessed its ability to detect lung cancer on chest radiographs. Chest radiographs for use as a training dataset and a test dataset were collected separately from January 2006 to June 2018 at our hospital. The training dataset was used to train and validate the DL-based model with five-fold cross-validation. The model sensitivity and mean false positive indications per image (mFPI) were assessed with the independent test dataset. The training dataset included 629 radiographs with 652 nodules/masses and the test dataset included 151 radiographs with 159 nodules/masses. The DL-based model had a sensitivity of 0.73 with 0.13 mFPI in the test dataset. Sensitivity was lower in lung cancers that overlapped with blind spots such as pulmonary apices, pulmonary hila, chest wall, heart, and sub-diaphragmatic space (0.50–0.64) compared with those in non-overlapped locations (0.87). The dice coefficient for the 159 malignant lesions was on average 0.52. The DL-based model was able to detect lung cancers on chest radiographs, with low mFPI.
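The Dice coefficient reported for the segmented lesions is the standard overlap measure 2|A∩B| / (|A| + |B|). A minimal sketch on flattened binary masks (illustrative only, not the study's code):

```python
def dice(mask_a, mask_b):
    """Dice coefficient between two same-length flattened binary masks
    (sequences of 0/1): 2 * |intersection| / (|A| + |B|)."""
    inter = sum(1 for x, y in zip(mask_a, mask_b) if x and y)
    return 2 * inter / (sum(mask_a) + sum(mask_b))
```

A Dice of 0.52, as reported, means the predicted and ground-truth lesion masks overlap in roughly half of their combined area, which is common for small, low-contrast nodules on radiographs.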
- Research Article
- 10.21037/qims-24-2079
- Sep 22, 2025
- Quantitative Imaging in Medicine and Surgery
Background: The automated breast volume scanner (ABVS), a type of ultrasound device, plays a crucial role in breast cancer screening; however, the ABVS data volume places a strain on clinicians. We aimed to develop an artificial intelligence (AI) model for the detection and classification of lesions as benign or malignant during ABVS examination. Methods: This retrospective study included 1,284 patients with 1,769 lesions who underwent ABVS examination between January 2017 and August 2021. The lesions were randomly divided into training and test sets at a 7:3 ratio. Using the test set, the performance of the You Only Look Once (YOLO) AI model, based on the YOLO version 8 architecture, was evaluated for single-target (background vs. lesion), categorical (benign vs. malignant), and varied lesion-diameter detection. Finally, differences in the diagnoses of four radiologists with different levels of experience before and after receiving AI model assistance were assessed. Results: The recall of the YOLO AI model for single-target detection was 0.983. The precision, recall, mean average precision (mAP) 50, and F1-score of the YOLO AI model for categorized target detection were 0.887, 0.866, 0.919, and 0.876, respectively. The precision, recall, mAP50, and F1-score for the classification of lesions with diameters ≤10 mm, 10 mm < diameter ≤ 20 mm, 20 mm < diameter ≤ 30 mm, and diameter >30 mm were 0.910, 0.806, 0.868, 0.855; 0.895, 0.844, 0.911, 0.869; 0.876, 0.867, 0.917, 0.871; and 0.882, 0.898, 0.941, 0.890, respectively. The area under the curve (AUC) values of the radiologists diagnosing breast lesions with YOLO AI assistance were 0.806, 0.890, 0.897, and 0.895, respectively, and these AUC values were better than their AUC values without AI assistance (P<0.001). Conclusions: The YOLO AI model can effectively identify and characterize breast lesions.
It improves radiologists’ diagnostic performance and bridges expertise gaps between radiologists.
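The F1-scores quoted above follow from the standard harmonic mean F1 = 2PR / (P + R); a quick check against the categorical-detection figures:

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Categorical-detection precision/recall reported in the abstract:
print(round(f1(0.887, 0.866), 3))  # matches the reported F1 of 0.876
```

The same identity can be applied to each lesion-diameter bucket to sanity-check the per-size metric quadruples.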
- Research Article
37
- 10.3390/agriculture12020248
- Feb 9, 2022
- Agriculture
The conventional method for crop insect detection, based on visual judgment in the field, is time-consuming, laborious, subjective, and error-prone. Early detection and accurate localization of agricultural insect pests can significantly improve the effectiveness of pest control and reduce its costs, which has become an urgent demand for crop production. Maize Spodoptera frugiperda is a migratory agricultural pest that has severely decreased the yield of maize, rice, and other crops worldwide. To monitor occurrences of maize Spodoptera frugiperda in a timely manner, an end-to-end Spodoptera frugiperda detection model termed the Pest Region-CNN (Pest R-CNN) was proposed based on the Faster Region-CNN (Faster R-CNN) model. Pest R-CNN operates on the feeding traces left on maize leaves by Spodoptera frugiperda. The proposed model was trained and validated using high-spatial-resolution red–green–blue (RGB) ortho-images acquired by an unmanned aerial vehicle (UAV). On the basis of feeding severity, the degree of Spodoptera frugiperda invasion was classified into four classes: juvenile, minor, moderate, and severe. The degree of severity and the specific feeding locations of S. frugiperda infestation can be determined and depicted as bounding boxes using the proposed model. A mean average precision (mAP) of 43.6% was achieved by the proposed model on the test dataset, showing the great potential of deep learning object detection in pest monitoring. Compared with the Faster R-CNN and YOLOv5 models, the detection accuracy of the proposed model increased by 12% and 19%, respectively. Further ablation studies showed the effectiveness of channel and spatial attention, group convolution, deformable convolution, and the multi-scale aggregation strategy in improving detection accuracy. The design methods of the object detection architecture could provide a reference for other research.
This is the first step in applying deep-learning object detection to S. frugiperda feeding traces, enabling the application of high-spatial-resolution RGB images obtained by UAVs to the detection of S. frugiperda infestation. The proposed model will be beneficial for monitoring S. frugiperda pest stress and realizing precision pest control.
- Research Article
- 10.4266/acc.003336
- Aug 1, 2025
- Acute and Critical Care
- Research Article
- 10.4266/acc.001425
- Aug 1, 2025
- Acute and Critical Care
- Research Article
- 10.4266/acc.000575
- Aug 1, 2025
- Acute and Critical Care
- Research Article
- 10.4266/acc.001125
- Aug 1, 2025
- Acute and Critical Care
- Research Article
- 10.4266/acc.000975
- Aug 1, 2025
- Acute and Critical Care
- Front Matter
- 10.4266/acc.003100
- Aug 1, 2025
- Acute and Critical Care
- Research Article
- 10.4266/acc.004200
- Aug 1, 2025
- Acute and Critical Care
- Research Article
- 10.4266/acc.000500
- Aug 1, 2025
- Acute and Critical Care
- Supplementary Content
- 10.4266/acc.001450
- Aug 1, 2025
- Acute and Critical Care
- Research Article
- 10.4266/acc.001050
- Aug 1, 2025
- Acute and Critical Care