Abstract

Recently, multimodal representation learning for images combined with other information, such as numerical or language data, has gained much attention. The aim of the current study was to analyze the diagnostic performance of a deep multimodal representation model that integrates the tumor image, patient background, and blood biomarkers for the differentiation of liver tumors observed using B-mode ultrasonography (US). First, we applied supervised learning with a convolutional neural network (CNN) to 972 liver nodules in the training and development sets to develop a predictive model using segmented B-mode tumor images. We also applied a deep multimodal representation model to integrate patient background and blood biomarker information with the B-mode images. We then investigated the performance of the models in an independent test set of 108 liver nodules. Using the segmented B-mode images alone, the diagnostic accuracy and area under the curve (AUC) were 68.52% and 0.721, respectively. As patient background and blood biomarker information was integrated, diagnostic performance increased in a stepwise manner. The diagnostic accuracy and AUC of the multimodal deep learning model, which integrated the B-mode tumor image with patient age, sex, aspartate aminotransferase, alanine aminotransferase, platelet count, and albumin, reached 96.30% and 0.994, respectively. Integrating patient background and blood biomarkers with the US image through multimodal representation learning outperformed the CNN model that used US images alone. We expect that the deep multimodal representation model could be a feasible and acceptable tool for the definitive diagnosis of liver tumors using B-mode US.
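
The study does not publish code, but the setup described above maps naturally onto a late-fusion architecture: a CNN encodes the segmented B-mode tumor image, a small network encodes the tabular inputs (age, sex, AST, ALT, platelet count, albumin), and the concatenated representations feed a shared classification head. The sketch below is a minimal PyTorch illustration of that idea only; the class name, layer sizes, number of output classes, and preprocessing are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed, not the authors' code) of late-fusion multimodal
# classification: CNN image branch + tabular branch + shared classifier head.
import torch
import torch.nn as nn


class MultimodalLiverTumorNet(nn.Module):
    def __init__(self, num_tabular_features: int = 6, num_classes: int = 2):
        super().__init__()
        # Image branch: a small CNN over single-channel B-mode tumor crops.
        self.image_encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (batch, 32)
        )
        # Tabular branch: patient background and blood biomarkers
        # (age, sex, AST, ALT, platelet count, albumin), assumed standardized.
        self.tabular_encoder = nn.Sequential(
            nn.Linear(num_tabular_features, 32), nn.ReLU(),
        )
        # Fusion head over the concatenated per-modality representations.
        self.classifier = nn.Sequential(
            nn.Linear(32 + 32, 64), nn.ReLU(), nn.Linear(64, num_classes),
        )

    def forward(self, image: torch.Tensor, tabular: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.image_encoder(image), self.tabular_encoder(tabular)], dim=1)
        return self.classifier(fused)


# Toy usage: a batch of 4 segmented tumor crops plus 6 tabular features each.
model = MultimodalLiverTumorNet()
logits = model(torch.randn(4, 1, 128, 128), torch.randn(4, 6))
print(logits.shape)  # torch.Size([4, 2])
```

Late fusion of per-modality encoder outputs is one common way to realize deep multimodal representation learning; adding the background and biomarker variables to the tabular branch in groups would mirror the stepwise comparison reported above, although the study's exact fusion strategy and training protocol are not reproduced here.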

Highlights

  • Ultrasonography (US) is widely used for hepatocellular carcinoma (HCC) surveillance to screen high-risk populations, because of its cost-effectiveness and non-invasiveness

  • Since B-mode US provides structural information that may reflect the histological characteristics of the tumor,[2] a precise and objective recognition of B-mode images has the potential to become a powerful tool for the qualitative diagnosis of liver tumors

  • As B-mode US itself provides structural information, an objective recognition of B-mode images using a machine learning (ML) approach has the potential to become a powerful tool for the qualitative diagnosis of liver tumors, as sketched below
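
To make the ML-based recognition idea concrete, here is a minimal, hypothetical sketch of an image-only CNN classifier for segmented B-mode crops built on an ImageNet-style backbone; the choice of torchvision's resnet18, the grayscale input adaptation, and the two output classes are assumptions, not details reported in the study.

```python
# Minimal sketch (assumed, not the study's actual pipeline) of an image-only
# CNN classifier for segmented B-mode tumor crops using torchvision.
import torch
import torch.nn as nn
from torchvision import models

cnn = models.resnet18(weights=None)  # weights="IMAGENET1K_V1" would start from ImageNet
# B-mode crops are single-channel, so swap the RGB stem and the final layer.
cnn.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
cnn.fc = nn.Linear(cnn.fc.in_features, 2)  # e.g. benign vs. malignant nodule

logits = cnn(torch.randn(4, 1, 224, 224))  # a toy batch of 4 crops
print(logits.shape)                         # torch.Size([4, 2])
```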


Introduction

Ultrasonography (US) is widely used for hepatocellular carcinoma (HCC) surveillance to screen high-risk populations because of its cost-effectiveness and non-invasiveness. A definitive diagnosis of liver tumors observed using B-mode sonography can be difficult because of the low specificity of this modality.[1] Currently, B-mode sonography is usually used in combination with contrast-enhanced imaging modalities such as computed tomography (CT) or magnetic resonance imaging (MRI) to obtain a definitive diagnosis. Since B-mode US provides structural information that may reflect the histological characteristics of the tumor,[2] precise and objective recognition of B-mode images has the potential to become a powerful tool for the qualitative diagnosis of liver tumors. The ImageNet Large Scale Visual Recognition Challenge is an annual computer vision competition; in the 2017 competition, DL technology with a deep convolutional neural network (CNN)

