Cancer remains one of the leading causes of mortality worldwide, necessitating continuous advances in early diagnosis and treatment. Deep learning, a subset of artificial intelligence (AI), has emerged as a powerful tool in medical image analysis, transforming the way cancer is detected and diagnosed. This study surveys the modalities employed in lung cancer diagnosis, including medical imaging (e.g., radiology and pathology), genomics, and clinical data, and highlights the challenges specific to each domain. The proposed Multimodal Fusion Deep Neural Network (MFDNN) architecture integrates information from these modalities to improve diagnostic accuracy. The study further examines the integration of clinical data, electronic health records, and multimodal approaches to improve the accuracy and reliability of lung cancer diagnosis, and discusses the ethical considerations surrounding the deployment of AI in clinical settings, together with the need for robust validation and regulatory guidelines. In evaluation, the MFDNN achieves an accuracy of 92.5%, a precision of 87.4% (the fraction of predicted cancer cases that are truly cancerous), a recall of 86.4% (the fraction of actual cancerous cases captured), and an F1-score of 86.2%, reflecting a balance between diagnostic precision and minimized missed diagnoses. Its performance is compared with established methods, including CNN, DNN, and ResNet models. The results underscore the MFDNN's potential to improve lung cancer diagnosis, promising more accurate and timely identification of this critical condition.
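The fusion strategy and the reported metrics can be illustrated with a minimal sketch: a late-fusion model that concatenates per-modality feature vectors before classification, followed by the standard precision/recall/F1 definitions. All feature dimensions, weights, and confusion-matrix counts below are hypothetical illustrations, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-modality feature vectors for one patient
# (dimensions are illustrative only).
imaging_feats = rng.standard_normal(64)   # e.g. CNN features from a CT scan
genomic_feats = rng.standard_normal(32)   # e.g. gene-expression summary
clinical_feats = rng.standard_normal(8)   # e.g. age, smoking history, ...

# Late fusion: concatenate modality embeddings, then classify.
fused = np.concatenate([imaging_feats, genomic_feats, clinical_feats])

W = rng.standard_normal((fused.size, 1)) * 0.1   # toy classifier weights
logit = fused @ W
prob = 1.0 / (1.0 + np.exp(-logit))              # probability of malignancy

# The reported metrics follow from confusion-matrix counts:
def precision_recall_f1(tp, fp, fn):
    precision = tp / (tp + fp)                    # correct among predicted positives
    recall = tp / (tp + fn)                       # captured among actual positives
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```

In this late-fusion design, each modality can be embedded by its own encoder before concatenation, so missing or noisy modalities degrade the fused representation gracefully rather than breaking a single shared input pipeline.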