Abstract

We present a novel deep learning model equipped with a new region-aware global context modeling technique for automatic nerve segmentation from ultrasound images, which is a challenging task due to (1) the large variation and blurred boundaries of targets, (2) the large amount of speckle noise in ultrasound images, and (3) the inherent real-time requirement of this task. To overcome these challenges, a segmentation network must efficiently capture long-range dependencies through global context modeling. Traditional global context modeling techniques usually explore pixel-aware correlations to establish long-range dependencies, which is computation-intensive and greatly degrades time performance. Moreover, in this application, pixel-aware modeling inevitably introduces considerable speckle noise into the computation and can degrade segmentation performance. In this paper, we propose a novel region-aware modeling technique that establishes long-range dependencies across different regions to improve segmentation accuracy while maintaining real-time performance; we call it the region-aware pyramid aggregation (RPA) module. To adaptively divide the feature maps into a set of semantically independent regions, we develop an attention mechanism and integrate it into the spatial pyramid network to evaluate the semantic similarity of different regions. We further develop an adaptive pyramid fusion (APF) module that dynamically fuses the multi-level features generated by the decoder to refine the segmentation results. We conducted extensive experiments on a well-known public ultrasound nerve segmentation dataset. Experimental results demonstrate that our method consistently outperforms competing methods in segmentation accuracy. The code is available at https://github.com/jsonliu-szu/RAGCM.
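To make the region-aware idea concrete, the sketch below is a minimal, hypothetical PyTorch illustration of attention over pooled regions rather than over individual pixels; it is not the authors' RPA or APF implementation (see the linked repository for that), and the module and parameter names (RegionAwareAggregation, pyramid_sizes) are invented for illustration. It pools the feature map into a small set of regions at several pyramid scales, scores the similarity between every pixel and every region with an attention mechanism, and aggregates the region descriptors back to each pixel, so the attention cost scales with the number of regions instead of the number of pixel pairs.

```python
# Hypothetical sketch of region-aware global context aggregation (not the
# paper's exact RPA module): pyramid-pooled region descriptors serve as
# attention keys/values, pixels serve as queries.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RegionAwareAggregation(nn.Module):
    def __init__(self, channels, pyramid_sizes=(1, 2, 4)):
        super().__init__()
        self.pyramid_sizes = pyramid_sizes
        self.query = nn.Conv2d(channels, channels, kernel_size=1)
        self.key = nn.Conv2d(channels, channels, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.out = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)          # (B, H*W, C)

        # Build region descriptors by adaptive pooling at several pyramid
        # scales, e.g. 1x1 + 2x2 + 4x4 = 21 regions instead of H*W pixels.
        keys, values = [], []
        for s in self.pyramid_sizes:
            pooled = F.adaptive_avg_pool2d(x, s)               # (B, C, s, s)
            keys.append(self.key(pooled).flatten(2))           # (B, C, s*s)
            values.append(self.value(pooled).flatten(2))       # (B, C, s*s)
        k = torch.cat(keys, dim=2)                             # (B, C, R)
        v = torch.cat(values, dim=2).transpose(1, 2)           # (B, R, C)

        # Pixel-to-region attention: O(H*W * R) instead of O((H*W)^2).
        attn = torch.softmax(q @ k / c ** 0.5, dim=-1)         # (B, H*W, R)
        ctx = (attn @ v).transpose(1, 2).reshape(b, c, h, w)   # (B, C, H, W)
        return x + self.out(ctx)                               # residual fusion


if __name__ == "__main__":
    feat = torch.randn(1, 64, 32, 32)
    print(RegionAwareAggregation(64)(feat).shape)  # torch.Size([1, 64, 32, 32])
```

Aggregating over a few dozen pooled regions rather than thousands of pixels is what keeps this style of global context modeling compatible with the real-time requirement, and averaging within regions also damps the per-pixel speckle noise that pixel-aware attention would otherwise propagate.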
