Abstract

Ultrasound imaging is the most commonly used technique for detecting thyroid abnormalities and diagnosing breast abnormalities. However, due to the inherent characteristics of ultrasound imaging, the contrast of ultrasound images is relatively low and tissue boundaries in the images are blurry, which makes segmenting thyroid or breast lesions from ultrasound images a challenging task. In this work, we explore the role of multimodal features at different levels of deep neural network models, and their fusion mechanisms, for multimodal images generated from ultrasound radio frequency (RF) data. Ultrasound RF data contain information related to the arrangement of scatterers in tissues and organs that is only partially preserved in B-mode images. To exploit this information, we use the RF data to generate parametric images, including Nakagami and entropy images, which reveal the rich backscattering statistics contained in the RF data. A channel-aware fusion module is proposed to adaptively fuse the features from images of different modalities. Experimental results show that the rich information contained in RF data helps identify thyroid glands and breast lesions with low contrast and fuzzy boundaries in B-mode ultrasound images and improves segmentation performance.
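
The abstract only summarizes how the Nakagami and entropy parametric images are generated from RF data. The sketch below is a minimal, hypothetical illustration, not the authors' implementation: it assumes a moment-based estimator of the Nakagami shape parameter, m = E[R²]² / Var(R²), and windowed Shannon entropy of the echo envelope; the function name `parametric_maps`, the window size, and the histogram bin count are illustrative assumptions.

```python
# Hypothetical sketch: Nakagami and Shannon-entropy parametric maps computed
# from a beamformed RF frame with a sliding window. `rf` is assumed to be a
# 2-D array of shape (axial samples, scan lines).
import numpy as np
from scipy.signal import hilbert

def parametric_maps(rf, win=(32, 8), bins=64, eps=1e-12):
    """Return (nakagami_m, entropy) maps estimated from an RF frame."""
    env = np.abs(hilbert(rf, axis=0))   # echo envelope via the Hilbert transform
    h, w = win
    out_shape = (env.shape[0] - h + 1, env.shape[1] - w + 1)
    m_map = np.zeros(out_shape)
    e_map = np.zeros(out_shape)
    for i in range(out_shape[0]):
        for j in range(out_shape[1]):
            patch = env[i:i + h, j:j + w].ravel()
            p2 = patch ** 2
            # Moment-based Nakagami shape estimator: m = E[R^2]^2 / Var(R^2)
            m_map[i, j] = p2.mean() ** 2 / (p2.var() + eps)
            # Shannon entropy of the windowed envelope histogram
            hist, _ = np.histogram(patch, bins=bins)
            p = hist / hist.sum()
            e_map[i, j] = -np.sum(p * np.log2(p + eps))
    return m_map, e_map
```

Applied to one RF frame, these two maps, together with the B-mode image, would form the multimodal input that the abstract describes.

The channel-aware fusion module is likewise named but not specified in the abstract. The following is a minimal sketch assuming a squeeze-and-excitation-style design in PyTorch that re-weights the concatenated multimodal feature channels; the class name `ChannelAwareFusion` and the reduction ratio are assumptions, not the paper's exact architecture.

```python
# Hypothetical sketch of a channel-aware fusion module: features from the
# B-mode, Nakagami, and entropy branches are concatenated along the channel
# axis and adaptively re-weighted by learned channel attention.
import torch
import torch.nn as nn

class ChannelAwareFusion(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: global context per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                    # excitation: channel weights in (0, 1)
        )

    def forward(self, feats):
        x = torch.cat(feats, dim=1)          # concatenate per-modality features
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                         # adaptively fused features
```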
