Abstract

Deep learning (DL)-based multisource information processing plays an essential role in the Internet of Medical Things (IoMT). In this field, medical image fusion integrates scan results from different devices, helping healthcare systems make more informed diagnoses. This study proposes a DL-based network to fuse multimodal medical images. In our method, a lattice unit (LU) is designed to improve the representation capability of the fusion network. Moreover, to acquire hierarchical features from images, a progressive module (PM) treats the network's shallow and deep layers differently: the shallow layers capture the structure of the source images, while the deep layers capture fine details. Different loss functions are applied to these two kinds of information so that the fused images retain salient structures and functional information. Experiments show that the proposed algorithm performs well in both visual quality and objective evaluation, providing a reliable reference for medical diagnosis. In addition, the method is lightweight and fast compared with existing algorithms, facilitating its deployment on dedicated IoMT devices.
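
The abstract does not give implementation details, but the overall scheme it describes (lattice-style blocks, a shallow/deep split, and separate structure and detail loss terms) can be illustrated with a minimal PyTorch sketch. Everything below is an assumption made for illustration: LatticeUnit, ProgressiveFusionNet, fusion_loss, and the specific loss formulation are hypothetical stand-ins, not the paper's actual architecture or losses.

# Minimal sketch of a progressive fusion network with a two-term loss,
# assuming a generic "lattice" block design; the paper's real LU/PM and
# loss functions are not specified in the abstract.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LatticeUnit(nn.Module):
    """Hypothetical lattice-style block: two parallel conv branches whose
    outputs cross-modulate each other (a common lattice pattern)."""
    def __init__(self, channels):
        super().__init__()
        self.branch_a = nn.Conv2d(channels, channels, 3, padding=1)
        self.branch_b = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        a = F.relu(self.branch_a(x))
        b = F.relu(self.branch_b(x))
        # Cross-connections between the two branches form the "lattice".
        return a * torch.sigmoid(b) + b * torch.sigmoid(a)

class ProgressiveFusionNet(nn.Module):
    """Toy progressive network: shallow features stand for structure,
    deeper features for detail, per the abstract's description."""
    def __init__(self, channels=16):
        super().__init__()
        self.stem = nn.Conv2d(2, channels, 3, padding=1)  # two source modalities
        self.shallow = LatticeUnit(channels)
        self.deep = nn.Sequential(LatticeUnit(channels), LatticeUnit(channels))
        self.head = nn.Conv2d(channels, 1, 3, padding=1)

    def forward(self, img_a, img_b):
        x = F.relu(self.stem(torch.cat([img_a, img_b], dim=1)))
        s = self.shallow(x)   # shallow features (structure)
        d = self.deep(s)      # deep features (detail)
        return torch.sigmoid(self.head(d))

def fusion_loss(fused, img_a, img_b, alpha=0.5):
    """Illustrative composite loss: an intensity (structure) term plus a
    gradient (detail) term. Assumed, not taken from the paper."""
    # Structure term: stay close to the pixelwise-brighter source, a
    # common salient-intensity proxy in fusion work.
    structure = F.l1_loss(fused, torch.maximum(img_a, img_b))

    # Detail term: match the stronger of the two source gradients.
    def grad(t):
        return t[..., :, 1:] - t[..., :, :-1], t[..., 1:, :] - t[..., :-1, :]

    fx, fy = grad(fused)
    ax, ay = grad(img_a)
    bx, by = grad(img_b)
    tx = torch.where(ax.abs() >= bx.abs(), ax, bx)
    ty = torch.where(ay.abs() >= by.abs(), ay, by)
    detail = F.l1_loss(fx, tx) + F.l1_loss(fy, ty)
    return alpha * structure + (1 - alpha) * detail

if __name__ == "__main__":
    net = ProgressiveFusionNet()
    a = torch.rand(1, 1, 64, 64)  # e.g., an MRI slice
    b = torch.rand(1, 1, 64, 64)  # e.g., a PET/CT slice
    fused = net(a, b)
    print(fused.shape, fusion_loss(fused, a, b).item())

The max-intensity and max-gradient targets above are one conventional choice for unsupervised fusion objectives; the paper may use different structure and detail terms for its shallow and deep layers.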
