Abstract

This article proposes a multimodal medical image fusion method based on CNNs and supervised learning to address problems in practical medical diagnosis. It can handle different types of multimodal medical image fusion problems in batch-processing mode, effectively overcoming the limitation of traditional approaches, which can fuse only a single pair of images at a time. To a certain extent, the method greatly improves fusion quality, image detail clarity, and time efficiency. The experimental results indicate that the proposed method exhibits state-of-the-art fusion performance in terms of visual quality and a variety of quantitative evaluation criteria, and it has broad applicability in medical diagnosis.

Highlights

  • With the continuous advancement of medical image processing research in recent years, image fusion has become an effective solution that automatically detects the information in different images and integrates it to produce one composite image in which all objects of interest are clear

  • The recent research in image fusion based on deep learning is as follows: pixel-level image fusion, convolutional neural networks (CNNs) (Liu et al., 2017; Cheng et al., 2018; Hou et al., 2018; Schlemper et al., 2018; Shao and Cai, 2018; Sun et al., 2019), convolutional sparse representation (Li and Wu, 2019), stacked autoencoders (Ahmad et al., 2017; Bao et al., 2017; Jiao et al., 2018; Ma et al., 2018; Chen et al., 2019; Kosiorek et al., 2019), and deep belief networks (DBNs) (Dong et al., 2016; Chen and Li, 2017; Ye et al., 2018)

  • The results show that our methods are effective in image fusion

Summary

INTRODUCTION

With the continuous advancement of medical image processing research in recent years, image fusion has become an effective solution that automatically detects the information in different images and integrates it to produce one composite image in which all objects of interest are clear. An effective method (Hou et al., 2018) for encoding the spatiotemporal information of a skeleton sequence into color texture images, referred to as skeleton optical spectra, employs CNNs (ConvNets) to learn discriminative features for action recognition; this is a typical result of action recognition research. To meet the needs of the aforementioned medical images and conduct tentative research toward automatic diagnostic technology, supervised deep learning methods were used to achieve image fusion. The approach consists of two major steps: model learning and fusion testing. In the fusion testing step, multiple groups of test images are fed into the trained model, which completes the fusion process.
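The two-step pipeline above (model learning, then a fusion test that feeds image pairs to the trained model) can be sketched in miniature. The snippet below is only a hand-crafted stand-in for the fusion-test step: instead of the paper's trained CNN, it uses local gradient energy as the per-pixel activity measure and keeps, at each pixel, the source image with the higher activity. The function names are illustrative, not from the paper.

```python
import numpy as np

def activity_map(img, ksize=3):
    """Per-pixel activity via local gradient energy.

    A hand-crafted stand-in for the feature maps a trained CNN
    would produce after the model-learning step.
    """
    gy, gx = np.gradient(img.astype(float))
    energy = gx ** 2 + gy ** 2
    # Sum energy over a ksize x ksize neighborhood (box filter).
    pad = ksize // 2
    padded = np.pad(energy, pad, mode="edge")
    h, w = energy.shape
    out = np.zeros_like(energy)
    for dy in range(ksize):
        for dx in range(ksize):
            out += padded[dy:dy + h, dx:dx + w]
    return out

def fuse(img_a, img_b):
    """Fusion-test step: keep the more active source at each pixel."""
    mask = activity_map(img_a) >= activity_map(img_b)
    return np.where(mask, img_a, img_b)
```

For the batch-processing mode described in the abstract, `fuse` would simply be mapped over every registered image pair (e.g., CT/SPECT) in a test set.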

Experimental Results on CT and SPECT Images