This paper presents a deep learning approach for the non-invasive assessment of the differentiation degree of hepatocellular carcinoma (HCC) based on multi-parameter magnetic resonance (MR) images, drawing on the clinical diagnostic experience of radiologists and the characteristics of MR imaging. Methods of multimodal data fusion are studied on multi-parameter MR imaging data. A multi-channel three-dimensional convolutional neural network and a multi-scale deep residual network are proposed to extract features from three-dimensional medical image data and from two-dimensional fused medical image data, respectively. To address the shortage of clinical HCC imaging cases, we examine the roles of transfer learning and metric learning in medical image classification. Combining data fusion, transfer learning, and multi-scale feature extraction, we construct a deep learning model for computer-aided diagnosis from medical images. Fusing complementary multimodal images at the decision level exploits their complementarity and effectively improves diagnostic performance. Experiments show that, although natural and medical images differ markedly, initializing the network with a model pre-trained on a natural image dataset ensures stable training and convergence while also improving performance on the test set. The proposed multi-scale feature extraction model enhances robustness and further improves medical image classification.
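The abstract combines a multi-channel 3D network over volumetric multi-parameter MR data, a 2D branch initialized from natural-image weights (transfer learning), and decision-level fusion of the two. The sketch below is only an illustration of that general idea under assumed design choices (layer sizes, ResNet-18 backbone, probability averaging); it is not the authors' released code, and all module and variable names are hypothetical.

```python
# Minimal sketch (assumed design, not the authors' implementation):
# a 3D branch over multi-parameter MR volumes plus an ImageNet-initialized
# 2D branch over fused images, combined by decision-level fusion.
import torch
import torch.nn as nn
import torchvision


class MultiChannel3DCNN(nn.Module):
    """3D branch: each MR sequence (e.g. T1, T2, DWI) is one input channel."""
    def __init__(self, in_channels=3, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.BatchNorm3d(16), nn.ReLU(inplace=True), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm3d(32), nn.ReLU(inplace=True), nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):              # x: (B, C, D, H, W)
        return self.classifier(self.features(x).flatten(1))


class TransferredResNet2D(nn.Module):
    """2D branch for fused images, initialized from natural-image weights."""
    def __init__(self, num_classes=3):
        super().__init__()
        # Downloads ImageNet-pretrained weights on first use (transfer learning).
        weights = torchvision.models.ResNet18_Weights.DEFAULT
        backbone = torchvision.models.resnet18(weights=weights)
        backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)
        self.backbone = backbone

    def forward(self, x):              # x: (B, 3, H, W) fused 2D image
        return self.backbone(x)


class DecisionFusionModel(nn.Module):
    """Decision-level fusion: average the per-branch class probabilities."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.branch3d = MultiChannel3DCNN(num_classes=num_classes)
        self.branch2d = TransferredResNet2D(num_classes=num_classes)

    def forward(self, vol, img):
        p3d = torch.softmax(self.branch3d(vol), dim=1)
        p2d = torch.softmax(self.branch2d(img), dim=1)
        return (p3d + p2d) / 2


if __name__ == "__main__":
    model = DecisionFusionModel(num_classes=3)
    vol = torch.randn(2, 3, 32, 64, 64)   # toy multi-parameter MR volume
    img = torch.randn(2, 3, 224, 224)     # toy fused 2D image
    print(model(vol, img).shape)          # -> torch.Size([2, 3])
```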