Abstract

Integrating multimodal lung data, including clinical notes, medical images, and molecular data, is critical for predictive modeling tasks such as survival prediction, yet effectively aligning these disparate data types remains challenging. We present a novel method to integrate heterogeneous lung modalities by first thoroughly analyzing various domain-specific models and selecting the optimal model for embedding feature extraction per data type, based on performance on representative pretraining tasks. For clinical notes, the GatorTron models showed the lowest regression loss on an initial evaluation set, with GatorTron-medium achieving a loss of 12.9. After selecting the top performers, we extracted robust embeddings from the full lung dataset built using the Multimodal Integration of Oncology Data System (MINDS) framework, which provides an end-to-end platform for aggregating and normalizing multimodal patient data. We aligned the multimodal embeddings to a central pretrained language model using contrastive representation learning with a cosine similarity loss function. To adapt the language model to the new modalities, we employed a parameter-efficient tuning method, adapter tuning, which introduces small trainable adapter layers while leaving the base model weights frozen, thereby avoiding catastrophic forgetting of the pretrained weights. We evaluated our multimodal model on prognostic prediction tasks, including survival regression and subtype classification, using both public and internal lung cancer datasets spanning multiple histologic subtypes and stages. The aligned multimodal model outperformed models using only single modalities, highlighting the benefits of integrating complementary information across diverse lung data types. This work illustrates the potential of flexible multimodal modeling for critical lung cancer prediction problems using heterogeneous real-world patient data.
Our model provides a strong foundation for incorporating emerging data types, modalities, and predictive tasks in the future.

Citation Format: Aakash Tripathi, Asim Waqas, Yasin Yilmaz, Ghulam Rasool. Multimodal transformer model improves survival prediction in lung cancer compared to unimodal approaches [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2024; Part 1 (Regular Abstracts); 2024 Apr 5-10; San Diego, CA. Philadelphia (PA): AACR; Cancer Res 2024;84(6_Suppl):Abstract nr 4905.
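The abstract's two core mechanisms, contrastive alignment with a cosine similarity loss and small trainable adapter layers over a frozen base model, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the symmetric InfoNCE-style formulation of the contrastive objective, the temperature value, and the bottleneck shape of the `Adapter` class are all assumptions chosen to show the general pattern.

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine_similarity_matrix(a, b):
    """Pairwise cosine similarity between rows of a and rows of b."""
    a_norm = a / np.linalg.norm(a, axis=1, keepdims=True)
    b_norm = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a_norm @ b_norm.T

def contrastive_alignment_loss(text_emb, other_emb, temperature=0.07):
    """Symmetric InfoNCE-style loss over cosine similarities: matched
    (text, other-modality) pairs sit on the diagonal and should score
    higher than mismatched pairs. Temperature 0.07 is an assumption."""
    logits = cosine_similarity_matrix(text_emb, other_emb) / temperature
    n = logits.shape[0]

    def cross_entropy(l):
        # Row-wise log-softmax; the target for row i is column i.
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(n), np.arange(n)].mean()

    # Average the text->modality and modality->text directions.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))

class Adapter:
    """Bottleneck adapter: down-project, ReLU, up-project, residual add.
    Only these small matrices would be trained; the base language model's
    weights stay frozen. Zero-initialized up-projection makes the adapter
    an identity map at the start of training."""
    def __init__(self, dim, bottleneck):
        self.W_down = rng.normal(scale=0.02, size=(dim, bottleneck))
        self.W_up = np.zeros((bottleneck, dim))

    def __call__(self, h):
        return h + np.maximum(h @ self.W_down, 0.0) @ self.W_up
```

For example, adapted text embeddings `Adapter(64, 16)(text_emb)` could be scored against image embeddings with `contrastive_alignment_loss`; in practice the adapter weights would be updated by gradient descent on that loss, which this sketch omits.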
