Abstract

Background: Survival analysis plays an important role in the cancer treatment process. Traditional models use clinical information, signs, and symptoms to predict patient outcome. Radiomics is another approach, based on image features and machine learning methods. Both approaches require hand-crafted features and the expertise of medical doctors, which is time-consuming. Recently, deep learning (DL) has achieved growing success in many medical tasks. In this report, we investigate a DL model architecture for non-small cell lung cancer (NSCLC) survival analysis from CT scans in an end-to-end strategy. As with most medical imaging modalities, a CT scan is three-dimensional. 3D CT images are large and demand substantial computation, memory, and time. Most studies scale the CT image down with linear methods, e.g., to 128 × 128 × 128 voxels, to fit the available computing resources. However, down-scaling images to low resolution with linear methods loses information and thus affects the final output. We address this problem with a proposed DL network architecture that scales the CT image down non-linearly to fit the resource constraints while trying to keep the most important information in the original image.

Material and method: CT images (n=1861) of non-small cell lung cancer (NSCLC) patients were collected at our hospital between 2013 and 2020. All patients are independent; we split the dataset into two separate subsets, training and testing, with 1516 and 345 samples, respectively. The training subset was used to build the model and the testing subset was held out for evaluation. Most studies scale the CT image down to low resolution with a linear approach before applying a 3D CNN for prediction. We assume that, in each slice, some regions contain more information than others, and we therefore apply a saliency sampling approach to resample the original CT image to low resolution. The network architecture, named Saliency Sampling per Slice (SSPS), has two blocks: a saliency sampling (SS) block and a classification block. First, a localization part detects the important regions in a slice and returns a map that scores each pixel. A grid is then built from this map and used to sample the original image to a low-resolution image. In this way, the most important parts of the image are scaled down less, and less important parts are scaled down more.

Results: For individual survival time prediction, we compare only non-censored data. Our SSPS approach achieves a mean absolute error (MAE) of 347.1 days. This result is far better than the naive prediction based on the mean value (389.2 days). Compared with the baseline 3D CNN architecture without the SS block (391.6 days), SSPS with the SS block improves the MAE by 43.5 days.

Conclusions: Our results suggest that the saliency sampling approach is effective when dealing with large images such as CT. A network with the SS block likely learns to focus on important regions of the image rather than on a linearly scaled-down image.

Citation Format: Hung Thanh Vo, Sae-Ryung Kang, In-Jae Oh, Soo-Hyung Kim. Improving lung cancer survival analysis from CT images by saliency sampling [abstract]. In: Proceedings of the AACR Virtual Special Conference on Artificial Intelligence, Diagnosis, and Imaging; 2021 Jan 13-14. Philadelphia (PA): AACR; Clin Cancer Res 2021;27(5_Suppl):Abstract nr PO-038.
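
The abstract does not give implementation details, but the saliency-guided resampling step described in Material and method can be illustrated with a short sketch. Below is a minimal, simplified PyTorch example of per-slice saliency sampling under the assumption of a separable (row/column) warping scheme; the module name SliceSaliencySampler, the tiny localization network, and the 128 × 128 output size are illustrative assumptions, not the authors' implementation.

    # Illustrative sketch only: a simplified, separable variant of saliency-based
    # non-uniform downsampling for a single CT slice. Names and network sizes are
    # hypothetical and chosen only to keep the example small and runnable.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SliceSaliencySampler(nn.Module):
        """Predicts a saliency map for a 2D slice and resamples the slice so that
        high-saliency regions keep more pixels in the low-resolution output."""
        def __init__(self, out_size=128):
            super().__init__()
            self.out_size = out_size
            # Tiny localization part that scores each pixel (assumption: the real
            # model is larger; this is only to make the sketch self-contained).
            self.localizer = nn.Sequential(
                nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                nn.Conv2d(8, 1, 3, padding=1),
            )

        def forward(self, x):                                   # x: (B, 1, H, W) slice
            saliency = torch.sigmoid(self.localizer(x)) + 1e-3  # (B, 1, H, W), > 0
            # Separable marginals: how much importance each column / row carries.
            col_w = saliency.sum(dim=2).squeeze(1)              # (B, W)
            row_w = saliency.sum(dim=3).squeeze(1)              # (B, H)
            xs = self._inverse_cdf(col_w, self.out_size)        # (B, out)
            ys = self._inverse_cdf(row_w, self.out_size)        # (B, out)
            # Build a sampling grid in [-1, 1] for grid_sample.
            grid_y = ys.unsqueeze(2).expand(-1, -1, self.out_size)
            grid_x = xs.unsqueeze(1).expand(-1, self.out_size, -1)
            grid = torch.stack([grid_x, grid_y], dim=-1)        # (B, out, out, 2)
            return F.grid_sample(x, grid, align_corners=True)

        @staticmethod
        def _inverse_cdf(weights, n):
            # Place n sample positions so that their density follows the weights:
            # salient columns/rows get more samples, flat regions get fewer.
            cdf = torch.cumsum(weights, dim=1)
            cdf = cdf / cdf[:, -1:]                             # normalize to (0, 1]
            targets = torch.linspace(0, 1, n, device=weights.device)
            targets = targets.repeat(weights.size(0), 1)        # (B, n)
            idx = torch.searchsorted(cdf, targets)
            idx = idx.clamp(max=weights.size(1) - 1).float()
            return idx / (weights.size(1) - 1) * 2 - 1          # map to [-1, 1]

    # Usage: downsample a 512 x 512 slice to 128 x 128, concentrating pixels
    # where the localization part assigns high saliency.
    sampler = SliceSaliencySampler(out_size=128)
    slice_lr = sampler(torch.randn(2, 1, 512, 512))             # -> (2, 1, 128, 128)

In this simplified form the warp is factorized per axis; the resampled slices would then be stacked and passed to the 3D CNN classification block, in the spirit of the SSPS architecture described above.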
