CNN-based prediction using early post-radiotherapy MRI as a proxy for toxicity in the murine head and neck

Abstract

Background and purpose: Radiotherapy (RT) of head and neck cancer can cause severe toxicities. Early identification of individuals at risk could enable personalized treatment. This study evaluated whether convolutional neural networks (CNNs) applied to magnetic resonance (MR) images acquired early after irradiation can predict radiation-induced tissue changes associated with toxicity in mice.

Patient/material and methods: Twenty-nine C57BL/6JRj mice were included (irradiated: n = 14; control: n = 15). Irradiated mice received 65 Gy of fractionated RT to the oral cavity, swallowing muscles and salivary glands. T2-weighted MR images were acquired 3–5 days post-irradiation. CNN models (VGG, MobileNet, ResNet, EfficientNet) were trained to classify sagittal slices as irradiated or control (n = 586 slices). Predicted class probabilities were correlated with five toxicity endpoints assessed 8–105 days post-irradiation. Model explainability was assessed with VarGrad heatmaps to verify that predictions relied on clinically relevant image regions.

Results: The best-performing model (EfficientNet B3) achieved 83% slice-level accuracy (ACC) and correctly classified 28 of 29 mice. Higher predicted probabilities for the irradiated class were strongly associated with oral mucositis, dermatitis, reduced saliva production, late submandibular gland fibrosis and atrophy of salivary gland acinar cells. Explainability heatmaps confirmed that the CNNs focused on irradiated regions.

Interpretation: The high CNN classification accuracy, the regions highlighted by the explainability analysis and the strong correlations between model predictions and toxicity suggest that CNNs, together with post-irradiation MR imaging, may identify individuals at risk of developing toxicity.
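
The abstract describes a transfer-learning pipeline: slice-level binary classification with an ImageNet-pretrained backbone, aggregation of slice probabilities into a per-mouse prediction, and correlation of those probabilities with toxicity endpoints. The sketch below is illustrative only and is not the authors' code; the Keras framework, input size, optimizer settings, mean-over-slices aggregation and the choice of Spearman rank correlation are all assumptions.

```python
# Illustrative sketch only (not the authors' implementation). Assumes TensorFlow/Keras
# and 3-channel slice inputs; data loading is omitted.
import numpy as np
import tensorflow as tf

IMG_SIZE = (300, 300)  # EfficientNet-B3's default input resolution

def build_slice_classifier():
    """Binary classifier: P(slice comes from an irradiated mouse)."""
    base = tf.keras.applications.EfficientNetB3(
        include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
    base.trainable = False  # freeze for a warm-up phase; unfreeze later to fine-tune
    inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
    x = base(inputs, training=False)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    x = tf.keras.layers.Dropout(0.3)(x)
    outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

def mouse_level_probability(model, slices):
    """Aggregate slice-level probabilities into one probability per mouse (simple mean)."""
    probs = model.predict(np.asarray(slices), verbose=0).ravel()
    return float(probs.mean())

# Per-mouse probabilities can then be related to a toxicity endpoint (e.g. a mucositis
# score) with a rank correlation, one possible choice among several:
# rho, p = scipy.stats.spearmanr(mouse_probabilities, toxicity_scores)
```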

Similar Papers
  • Front Matter
  • Citations: 1
  • 10.1016/j.gie.2020.12.008
Artificial intelligence: finding the intersection of predictive modeling and clinical utility
  • Mar 7, 2021
  • Gastrointestinal Endoscopy
  • Karthik Ravi

  • Research Article
  • Citations: 80
  • 10.1016/j.jrmge.2021.09.004
Tunnel boring machine vibration-based deep learning for the ground identification of working faces
  • Dec 1, 2021
  • Journal of Rock Mechanics and Geotechnical Engineering
  • Mengbo Liu + 5 more

  • Research Article
  • Citations: 2
  • 10.4108/eetpht.10.5183
Prediction of Diabetic Retinopathy using Deep Learning with Preprocessing
  • Feb 22, 2024
  • EAI Endorsed Transactions on Pervasive Health and Technology
  • S Balaji + 2 more

INTRODUCTION: When Diabetic Retinopathy (DR) is not identified promptly, it frequently results in sight impairment. To properly diagnose and treat DR, image preprocessing methods and precise prediction models are essential. With the help of numerous well-liked filters and a Deep CNN (Convolutional Neural Network) model, the comprehensive method for DR image preparation and prognosis presented in this research is described. Using the filters that focus boundaries and contours in the ocular pictures is the first step in the initial processing stage. This procedure tries to find anomalies linked to DR. By using filters, image quality can be improved and noise reduced, preserving critical information. The Deep CNN algorithm has been trained to generate forecasts on the cleaned retinal pictures following the phase of preprocessing. The filters efficiently eliminate interference without sacrificing vital data. Convolutional type layers, pooling type layers, and fully associated layers are used in the CNN framework, which was created especially for image categorization tasks, to acquire data and understand the relationships associated with DR.

OBJECTIVES: Using image preprocessing techniques such as the Sobel, Wiener, Gaussian, and non-local mean filters is a promising approach for DR analysis. Then, predicting using a CNN completes the approach. These preprocessing filters enhance the images and prepare them for further examination. The pre-processed images are fed into a CNN model. The model extracts significant information from the images by identifying complex patterns. DR or classification may be predicted by the CNN model through training on a labeled dataset.

METHODS: Preprocessing is employed to enhance the clarity and contrast of retinal fundus pictures by removing noise and fluctuation. The preprocessing stage is utilized for the normalization of the pictures and non-uniform brightness adjustment, in addition to contrast augmentation and noise mitigation, to remove noise and improve the precision of the subsequent processing stages.

RESULTS: To improve image quality and reduce noise, preprocessing techniques including Sobel, Wiener, Gaussian, and non-local mean filters are frequently employed in image processing jobs. For a particular task, the non-local mean filter produces superior results; for enhanced performance, it may be advantageous to combine it with a CNN. Before supplying the processed images to the CNN for prediction, the non-local mean filter can help reduce noise and improve image details.

CONCLUSION: A promising method for DR analysis entails the use of image preprocessing methods such as the Sobel, Wiener, Gaussian, and non-local mean filters, followed by prediction using a CNN. These preprocessing filters improve the photos and get them ready for analysis. After being pre-processed, the photos are sent into a CNN model, which uses its capacity to discover intricate patterns to draw out important elements from the images. The CNN model may predict DR or classification by training it on a labeled dataset. The development of computer-aided diagnosis systems for DR is facilitated by the integration of CNN prediction with image preprocessing filters. This strategy may increase the effectiveness of healthcare workers, boost patient outcomes, and lessen the burden of DR.
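
As an illustration of the filtering chain that abstract names (Sobel, Wiener, Gaussian and non-local-means filtering before CNN classification), here is a hedged sketch assuming scikit-image and SciPy; the file name and all filter parameters are placeholders, not values from the paper.

```python
# Hedged sketch of the named preprocessing filters; parameters are illustrative.
import numpy as np
from scipy.signal import wiener
from skimage import io, img_as_float
from skimage.filters import sobel, gaussian
from skimage.restoration import denoise_nl_means, estimate_sigma

fundus = img_as_float(io.imread("retina_image.png", as_gray=True))  # placeholder path

sigma = np.mean(estimate_sigma(fundus))                  # noise estimate for NL-means
nlm = denoise_nl_means(fundus, h=1.15 * sigma, fast_mode=True,
                       patch_size=5, patch_distance=6)   # non-local means denoising
wien = wiener(fundus, (5, 5))                            # Wiener denoising
gauss = gaussian(fundus, sigma=1.0)                      # Gaussian smoothing
edges = sobel(nlm)                                       # edge/contour emphasis

# Any of these filtered images (or a stack of them as channels) can then be fed
# to a CNN classifier for the DR / no-DR decision.
```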

  • Abstract
  • 10.1016/j.cvdhj.2022.07.007
A CONVOLUTIONAL NEURAL NETWORK FOR AUTOMATIC DISCRIMINATION OF PAUSE EPISODES DETECTED BY AN INSERTABLE CARDIAC MONITOR
  • Aug 1, 2022
  • Cardiovascular Digital Health Journal
  • Elnaz Lashgari + 11 more

  • Research Article
  • Citations: 8
  • 10.1212/wnl.0000000000207411
Convolutional Neural Network Algorithm to Determine Lateralization of Seizure Onset in Patients With Epilepsy: A Proof-of-Principle Study.
  • May 18, 2023
  • Neurology
  • Erik Kaestner + 12 more

A new frontier in diagnostic radiology is the inclusion of machine-assisted support tools that facilitate the identification of subtle lesions often not visible to the human eye. Structural neuroimaging plays an essential role in the identification of lesions in patients with epilepsy, which often coincide with the seizure focus. In this study, we explored the potential for a convolutional neural network (CNN) to determine lateralization of seizure onset in patients with epilepsy using T1-weighted structural MRI scans as input. Using a dataset of 359 patients with temporal lobe epilepsy (TLE) from 7 surgical centers, we tested whether a CNN based on T1-weighted images could classify seizure laterality concordant with clinical team consensus. This CNN was compared with a randomized model (comparison with chance) and a hippocampal volume logistic regression (comparison with current clinically available measures). Furthermore, we leveraged a CNN feature visualization technique to identify regions used to classify patients. Across 100 runs, the CNN model was concordant with clinician lateralization on average 78% (SD = 5.1%) of runs with the best-performing model achieving 89% concordance. The CNN outperformed the randomized model (average concordance of 51.7%) on 100% of runs with an average improvement of 26.2% and outperformed the hippocampal volume model (average concordance of 71.7%) on 85% of runs with an average improvement of 6.25%. Feature visualization maps revealed that in addition to the medial temporal lobe, regions in the lateral temporal lobe, cingulate, and precentral gyrus aided in classification. These extratemporal lobe features underscore the importance of whole-brain models to highlight areas worthy of clinician scrutiny during temporal lobe epilepsy lateralization. This proof-of-concept study illustrates that a CNN applied to structural MRI data can visually aid clinician-led localization of epileptogenic zone and identify extrahippocampal regions that may require additional radiologic attention. This study provides Class II evidence that in patients with drug-resistant unilateral temporal lobe epilepsy, a convolutional neural network algorithm derived from T1-weighted MRI can correctly classify seizure laterality.
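
The feature-visualization step mentioned above can be approximated with a simple gradient saliency map. The sketch below, assuming Keras, is a generic stand-in rather than the specific technique the study used; `model` and `batch` are placeholders.

```python
import tensorflow as tf

def gradient_saliency(model, batch):
    """Per-pixel sensitivity of the predicted score to the input.

    `model` is any trained Keras classifier; `batch` has shape (1, H, W, C).
    Both are placeholders here.
    """
    x = tf.convert_to_tensor(batch, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        score = model(x, training=False)[:, 0]        # score of one output unit
    grads = tape.gradient(score, x)                    # d(score) / d(input)
    return tf.reduce_max(tf.abs(grads), axis=-1)       # collapse channels -> heatmap
```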

  • Book Chapter
  • Citations: 2
  • 10.1007/978-3-030-72379-8_17
Evaluating and Comparing Deep Learning Architectures for Blood Glucose Prediction
  • Jan 1, 2021
  • Touria El Idrissi + 1 more

To manage their disease, diabetic patients need to control the blood glucose level (BGL) by monitoring it and predicting its future values. This allows them to avoid high or low BGL by taking recommended actions in advance. In this paper, we conduct a comparative study of two emerging deep learning techniques, Long Short-Term Memory (LSTM) and Convolutional Neural Networks (CNN), for one-step and multi-step-ahead forecasting of the BGL based on Continuous Glucose Monitoring (CGM) data. The objectives are twofold: 1) determining the best multi-step-ahead forecasting (MSF) strategies for the CNN and LSTM models, respectively, and 2) comparing the performance of the CNN and LSTM models for one-step and multi-step prediction. Toward these objectives, we first conducted a series of experiments on a CNN model, using parameter selection to determine its best configuration. The LSTM model used in the present study was developed and evaluated in an earlier work. Thereafter, five MSF strategies were developed and evaluated for the CNN and LSTM models using the root-mean-square error (RMSE) with a horizon of 30 min. To statistically assess the differences between the performance of the CNN and LSTM models, we used the Wilcoxon statistical test. The results showed that 1) no MSF strategy outperformed the others for either the CNN or the LSTM model, and 2) the proposed CNN model significantly outperformed the LSTM model for both one-step and multi-step prediction.
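
For orientation, a one-step-ahead CNN forecaster of the kind compared above might look like the following Keras sketch; the window length, sampling interval and layer sizes are assumptions, not values from the chapter.

```python
# Minimal 1D-CNN sketch mapping a window of past CGM readings to the next value.
import tensorflow as tf

WINDOW = 24  # e.g. 2 h of history at 5-min sampling (illustrative)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, 1)),
    tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(64, kernel_size=3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1),                 # one-step-ahead glucose prediction
])
model.compile(optimizer="adam", loss="mse",
              metrics=[tf.keras.metrics.RootMeanSquaredError()])
# Multi-step strategies (recursive, direct, MIMO, ...) differ only in how this
# one-step model is applied or in the size of the output layer.
```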

  • Research Article
  • Citations: 6
  • 10.1016/j.ejrad.2022.110287
Diagnostic performance evaluation of adult Chiari malformation type I based on convolutional neural networks
  • Apr 2, 2022
  • European Journal of Radiology
  • Wei-Wei Lin + 6 more

  • Research Article
  • Citations: 2
  • 10.1038/s41598-023-41603-6
Hyperspectral signature-band extraction and learning: an example of sugar content prediction of Syzygium samarangense
  • Sep 12, 2023
  • Scientific Reports
  • Yung-Jhe Yan + 5 more

This study proposes a method to extract the signature bands from the deep learning models of multispectral data converted from the hyperspectral data. The signature bands with two deep-learning models were further used to predict the sugar content of the Syzygium samarangense. Firstly, the hyperspectral data with the bandwidths lower than 2.5 nm were converted to the spectral data with multiple bandwidths higher than 2.5 nm to simulate the multispectral data. The convolution neural network (CNN) and the feedforward neural network (FNN) used these spectral data to predict the sugar content of the Syzygium samarangense and obtained the lowest mean absolute error (MAE) of 0.400° Brix and 0.408° Brix, respectively. Secondly, the absolute mean of the integrated gradient method was used to extract multiple signature bands from the CNN and FNN models for sugariness prediction. A total of thirty sets of six signature bands were selected from the CNN and FNN models, which were trained by using the spectral data with five bandwidths in the visible (VIS), visible to near-infrared (VISNIR), and visible to short-waved infrared (VISWIR) wavelengths ranging from 400 to 700 nm, 400 to 1000 nm, and 400 to 1700 nm. Lastly, these signature-band data were used to train the CNN and FNN models for sugar content prediction. The FNN model using VISWIR signature bands with a bandwidth of ± 12.5 nm had a minimum MAE of 0.390°Brix compared to the others. The CNN model using VISWIR signature bands with a bandwidth of ± 10 nm had the lowest MAE of 0.549° Brix compared to the other CNN models. The MAEs of the models with only six spectral bands were even better than those with tens or hundreds of spectral bands. These results reveal that six signature bands have the potential to be used in a small and compact multispectral device to predict the sugar content of the Syzygium samarangense.
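
The band-ranking step above relies on integrated gradients. A compact, generic sketch follows, assuming TensorFlow and a spectrum-in, sugar-content-out regression model; the model, spectrum shape and all-zero baseline are placeholders rather than details from the paper.

```python
import numpy as np
import tensorflow as tf

def integrated_gradients(model, spectrum, steps=50):
    """Per-band attribution for one reflectance spectrum of shape (n_bands,)."""
    x = tf.cast(tf.convert_to_tensor(spectrum), tf.float32)
    baseline = tf.zeros_like(x)                            # all-zero reference spectrum
    alphas = tf.reshape(tf.linspace(0.0, 1.0, steps), (steps, 1))
    interpolated = baseline + alphas * (x - baseline)      # (steps, n_bands)
    with tf.GradientTape() as tape:
        tape.watch(interpolated)
        preds = model(interpolated)                        # predicted sugar content
    grads = tape.gradient(preds, interpolated)             # (steps, n_bands)
    return ((x - baseline) * tf.reduce_mean(grads, axis=0)).numpy()

# Ranking np.abs(...) of the attributions and keeping the six largest indices mirrors
# the "signature band" selection idea described above.
```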

  • Research Article
  • 10.15294/sji.v11i4.13529
Comparison of KNN and CNN Algorithms for Gender Classification Based on Eye Images
  • Dec 10, 2024
  • Scientific Journal of Informatics
  • Rizky Dwi Wicaksono + 1 more

Purpose: This study explores gender classification using iris images and compares two methods: k-nearest neighbors (KNN) and convolutional neural networks (CNN). Most research has focused on facial recognition; however, iris classification is more distinctive and accurate. This research addresses a gap in gender classification using iris images and tests the effectiveness of CNN and KNN for this task.

Methods: This study used 11,525 iris images from Kaggle; of these, 6,323 were male and 5,202 were female. The authors split the data into training (75%) and testing (25%). Preprocessing involved normalizing the images and augmenting them by rotating, scaling, shifting, and reflecting them; pixel values were also adjusted. The study compared the KNN algorithm, using Euclidean distance and 16 neighbors, with a CNN model. The CNN had convolutional, pooling, and dense layers. Evaluation was performed using accuracy, precision, recall, F1-score, and a confusion matrix.

Result: The KNN model demonstrated 81% accuracy. It identified males with 87% precision but only 70% recall. The CNN model performed better, achieving 93% accuracy with 94% precision and 95% recall for males. The CNN model also outperformed KNN for females in precision, recall, and F1-score, indicating its superior ability to learn patterns and classify gender from iris images.

Novelty: CNN outperforms KNN in classifying gender from iris images, effectively recognizing patterns and achieving high accuracy. The study shows CNN's superiority in biometric tasks and suggests that future research should balance datasets, test stronger models, and combine models for better performance.
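
A minimal scikit-learn sketch of the KNN baseline described above (Euclidean distance, 16 neighbours, 75/25 split) is shown below; the feature matrix is random stand-in data rather than real iris images.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
X = rng.random((400, 64 * 64))          # stand-in for flattened, normalised iris images
y = rng.integers(0, 2, size=400)        # dummy labels: 0 = female, 1 = male

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

knn = KNeighborsClassifier(n_neighbors=16, metric="euclidean")
knn.fit(X_train, y_train)
print(classification_report(y_test, knn.predict(X_test)))
```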

  • Research Article
  • 10.3390/infrastructures10050125
An Investigation on Prediction of Infrastructure Asset Defect with CNN and ViT Algorithms
  • May 20, 2025
  • Infrastructures
  • Nam Lethanh + 2 more

Convolutional Neural Networks (CNNs) have been demonstrated to be one of the most powerful methods for image recognition, being applied in many fields, including civil and structural health monitoring in infrastructure asset management. Current State-of-the-Art CNN models are now accessible as open-source and available on several Artificial Intelligence (AI) platforms, with TensorFlow being widely used. Besides CNN models, Vision Transformers (ViTs) have recently emerged as a competitive alternative. Several demonstrations have indicated that ViT models, in many instances, outperform the current CNNs by almost four times in terms of computational efficiency and accuracy. This paper presents an investigation into defect detection for civil and structural components using CNN and ViT models available on TensorFlow. An empirical study was conducted using a database of cracks. The severity of crack is categorized into binary states: “with crack” and “without crack”. The results confirm that the accuracies of both CNN and ViT models exceed 95% after 100 epochs of training, with no significant difference observed between them for binary classification. Notably, the cost of this AI-based approach with images taken by lightweight and low-cost drones is considerably lower compared to high-speed inspection cars, while still delivering an expected level of predictive accuracy.

  • Conference Article
  • 10.1063/5.0117245
A performance evaluation of convolutional and recurrent neural network on Philippine typhoon data
  • Jan 1, 2023
  • Justin Raz + 2 more

In meteorology, neural networks have the potential to be useful for advancing forecasting and prediction capabilities, especially since some were designed for time series data like weather data. This study investigated the performance of the Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN); both are well suited to time series classification problems. The models were designed to look back at 5 days of weather data to predict the presence or category of a typhoon (No Typhoon, Tropical Depression, Tropical Storm, Severe Tropical Storm, Typhoon, and Super Typhoon). The models were fed with weather data (obtained from NASA and PAGASA) from four locations in the Philippines with the parameters: atmospheric pressure, humidity, precipitation, temperature and wind speed. The research investigated Accuracy, Cross Entropy Error, Precision, Recall, and F1-Measure, validated using 12-fold rolling-basis cross-validation. The results reveal that the CNN and RNN models performed to varying extents: the CNN model scored better on average accuracy, whereas the RNN model performed better on average cross entropy error, precision, recall, and F1 measure. The RNN model achieved better precision scores for most categories, while the CNN model performed better at recall and F1 measure on other categories. Both performed better at precision, recall and F1 measure on No Typhoon than on other categories, likely because the historical data consist mostly of days with no typhoons.
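
The rolling-basis validation described above can be mimicked with scikit-learn's TimeSeriesSplit; the sketch below uses dummy weather features and labels and is not the study's code.

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

rng = np.random.default_rng(0)
X = rng.random((1200, 5))          # pressure, humidity, precipitation, temperature, wind speed
y = rng.integers(0, 6, size=1200)  # dummy typhoon categories (0 = none ... 5 = super typhoon)

for fold, (train_idx, test_idx) in enumerate(TimeSeriesSplit(n_splits=12).split(X), start=1):
    # In the real study a CNN/RNN would be trained on X[train_idx] and scored on X[test_idx].
    print(f"fold {fold:2d}: train={len(train_idx):4d}  test={len(test_idx):4d}")
```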

  • Research Article
  • Citations: 76
  • 10.1002/ctm2.102
Deep learning-based classification and mutation prediction from histopathological images of hepatocellular carcinoma.
  • Jun 1, 2020
  • Clinical and Translational Medicine
  • Haotian Liao + 12 more

  • Book Chapter
  • 10.1007/978-3-030-41114-5_11
Joint Task Offloading, CNN Layer Scheduling and Resource Allocation in Cooperative Computing System
  • Jan 1, 2020
  • Xia Song + 2 more

In this paper, we consider a cooperative computing system which consists of a number of mobile edge computing (MEC) servers deployed with a convolutional neural network (CNN) model, a remote mobile cloud computing (MCC) server deployed with a CNN model, and a number of mobile devices (MDs). We assume that each MD has a computation task and is allowed to offload its task to one MEC server, where the CNN model with various layers is applied to conduct task execution, and that one MEC server can accept multiple tasks from MDs. To enable cooperation between the MEC servers and the MCC server, we assume that the task of an MD which has been processed partially by the CNN model of the MEC server will be sent to the CNN model of the MCC server for further processing. We study the joint task offloading, CNN layer scheduling and resource allocation problem. By stressing the importance of task execution latency, the joint optimization problem is formulated as an overall task latency minimization problem. As the original optimization problem is NP-hard and cannot be solved conveniently, we transform it into three subproblems, i.e., a CNN layer scheduling subproblem, a task offloading subproblem and a resource allocation subproblem, and solve the three subproblems by means of an extensive search algorithm, the reformulation-linearization technique (RLT) and the Lagrangian dual method, respectively. Numerical results demonstrate the effectiveness of the proposed algorithm.

  • Research Article
  • Citations: 8
  • 10.1186/s12920-018-0416-0
Bi-stream CNN Down Syndrome screening model based on genotyping array
  • Nov 1, 2018
  • BMC Medical Genomics
  • Bing Feng + 9 more

Background: Human Down syndrome (DS) is usually caused by genomic micro-duplications and dosage imbalances of human chromosome 21. It is associated with many genomic and phenotype abnormalities. Even though human DS occurs about 1 per 1,000 births worldwide, which is a very high rate, researchers haven't found any effective method to cure DS. Currently, the most efficient ways of human DS prevention are screening and early detection.

Methods: In this study, we used deep learning techniques and analyzed a set of Illumina genotyping array data. We built a bi-stream convolutional neural networks model to screen/predict the occurrence of DS. Firstly, we built image input data by converting the intensities of each SNP site into chromosome SNP maps. Next, we proposed a bi-stream convolutional neural network (CNN) architecture with nine layers and two branch models. We further merged two CNN branch models into one model in the fourth convolutional layer, and output the prediction in the last layer.

Results: Our bi-stream CNN model achieved 99.3% average accuracies, and very low false-positive and false-negative rates, which was necessary for further applications in disease prediction and medical practice. We further visualized the feature maps and learned filters from intermediate convolutional layers, which showed the genomic patterns and correlated SNPs variations in human DS genomes. We also compared our methods with other CNN and traditional machine learning models. We further analyzed and discussed the characteristics and strengths of our bi-stream CNN model.

Conclusions: Our bi-stream model used two branch CNN models to learn the local genome features and regional patterns among adjacent genes and SNP sites from two chromosomes simultaneously. It achieved the best performance in all evaluating metrics when compared with two single-stream CNN models and three traditional machine-learning algorithms. The visualized feature maps also provided opportunities to study the genomic markers and pathway components associated with human DS, which provided insights for gene therapy and genomic medicine developments.
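
The two-branch architecture described above can be expressed with the Keras functional API; the sketch below merges two convolutional streams with a concatenation, with input sizes and layer widths chosen arbitrarily for illustration rather than taken from the paper.

```python
# Hedged sketch of a two-branch ("bi-stream") CNN merged partway through the network.
import tensorflow as tf
from tensorflow.keras import layers

def branch(inp):
    x = layers.Conv2D(16, 3, activation="relu", padding="same")(inp)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(32, 3, activation="relu", padding="same")(x)
    x = layers.MaxPooling2D()(x)
    return x

in_a = tf.keras.Input(shape=(128, 128, 1), name="chromosome_a_map")  # placeholder size
in_b = tf.keras.Input(shape=(128, 128, 1), name="chromosome_b_map")  # placeholder size
merged = layers.Concatenate()([branch(in_a), branch(in_b)])          # merge the streams
x = layers.Conv2D(64, 3, activation="relu", padding="same")(merged)
x = layers.GlobalAveragePooling2D()(x)
out = layers.Dense(1, activation="sigmoid")(x)                       # DS vs. control
model = tf.keras.Model([in_a, in_b], out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```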

  • Research Article
  • Citations: 11
  • 10.1155/2020/1475164
The Real-Time Mobile Application for Classifying of Endangered Parrot Species Using the CNN Models Based on Transfer Learning
  • Mar 9, 2020
  • Mobile Information Systems
  • Daegyu Choe + 2 more

Among the many deep learning methods, the convolutional neural network (CNN) model has an excellent performance in image recognition. Research on identifying and classifying image datasets using CNN is ongoing. Animal species recognition and classification with CNN is expected to be helpful for various applications. However, sophisticated feature recognition is essential to classify quasi-species with similar features, such as the quasi-species of parrots that have a high color similarity. The purpose of this study is to develop a vision-based mobile application to classify endangered parrot species using an advanced CNN model based on transfer learning (some parrots have quite similar colors and shapes). We acquired the images in two ways: collecting them directly from the Seoul Grand Park Zoo and crawling them using the Google search. Subsequently, we have built advanced CNN models with transfer learning and trained them using the data. Next, we converted one of the fully trained models into a file for execution on mobile devices and created the Android package files. The accuracy was measured for each of the eight CNN models. The overall accuracy for the camera of the mobile device was 94.125%. For certain species, the accuracy of recognition was 100%, with the required time of only 455 ms. Our approach helps to recognize the species in real time using the camera of the mobile device. Applications will be helpful for the prevention of smuggling of endangered species in the customs clearance area.
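
The mobile-deployment step mentioned above (converting a trained model for on-device inference) typically looks like the following TensorFlow Lite sketch; the stand-in model and output filename are placeholders, not the study's artifacts.

```python
# Hedged sketch: export a trained Keras classifier to a .tflite file for an Android app.
import tensorflow as tf

trained_model = tf.keras.Sequential([            # stand-in for the real trained model
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(trained_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # optional post-training optimization
with open("parrot_classifier.tflite", "wb") as f:
    f.write(converter.convert())
```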

More from: Acta Oncologica
  • Research Article
  • 10.2340/1651-226x.2025.44211
Ultra-hypofractionated radiotherapy with focal boost for high-risk localized prostate cancer (HYPO-RT-PC-boost): in silico evaluation with histological reference
  • Oct 27, 2025
  • Acta Oncologica
  • Erik Nilsson + 16 more

  • Research Article
  • 10.2340/1651-226x.2025.44599
Integrating 2D dosimetry and cell survival analysis for predicting local effect in spatially fractionated radiotherapy
  • Oct 24, 2025
  • Acta Oncologica
  • Delmon Arous + 3 more

  • Research Article
  • 10.2340/1651-226x.2025.44007
Natural killer cell activity in prostate cancer patients treated with curative radiotherapy with or without androgen deprivation therapy: an observational study
  • Oct 19, 2025
  • Acta Oncologica
  • Stine V Eriksen + 6 more

  • Research Article
  • 10.2340/1651-226x.2025.44028
Low b-values in apparent diffusion coefficient calculations overestimate diffusion in rectal cancer
  • Oct 19, 2025
  • Acta Oncologica
  • Johanna A Hundvin + 7 more

  • Research Article
  • 10.2340/1651-226x.2025.44133
Perception of cure and quality of life in anal cancer survivors
  • Oct 15, 2025
  • Acta Oncologica
  • Nina Fanni + 6 more

  • Research Article
  • 10.2340/1651-226x.2025.44013
Real-world outcomes after concurrent chemo-radiotherapy in patients with locally advanced esophageal and gastroesophageal junction cancer
  • Oct 15, 2025
  • Acta Oncologica
  • Hanna Rahbek Mortensen + 4 more

  • Research Article
  • 10.2340/1651-226x.2025.43794
Radium-223 use and survival by line of treatment in metastatic castration-resistant prostate cancer: a nationwide population-based register study
  • Oct 13, 2025
  • Acta Oncologica
  • Charlotte Alverbratt + 5 more

  • Research Article
  • 10.2340/1651-226x.2025.43691
Impact of the COVID-19 pandemic on the quality of life of early breast cancer patients undergoing adjuvant chemotherapy – an observational, multicenter study
  • Oct 8, 2025
  • Acta Oncologica
  • Marie Tuomarila + 13 more

  • Research Article
  • 10.2340/1651-226x.2025.44533
Baseline laboratory values and metastatic burden predict survival in addition to IMDC risk in real-world renal cell carcinoma patients treated with ipilimumab-nivolumab
  • Oct 3, 2025
  • Acta Oncologica
  • Alaa Kheir + 12 more

  • Research Article
  • 10.2340/1651-226x.2025.43990
Real-world survival outcomes of neoadjuvant versus adjuvant chemotherapy in operable triple-negative breast cancer: a propensity score matched registry-based study
  • Oct 1, 2025
  • Acta Oncologica
  • Ali Inan El-Naggar + 2 more
