Abstract
The encoder-decoder model is a commonly used deep learning (DL) model for medical image segmentation. Encoder-decoder models make pixel-wise predictions that focus heavily on local patterns. As a result, the predicted mask often fails to preserve the object's shape and topology, which requires an understanding of the image's global context. In this work, we propose the Fourier Coefficient Segmentation Network (FCSN)---a novel global context-aware DL model that segments an object by learning the complex Fourier coefficients of the object's mask. The Fourier coefficients are calculated by integrating over the mask's contour. Hence, FCSN is naturally motivated to incorporate a broader image context when estimating the coefficients. The global context awareness of FCSN helps produce more accurate segmentations and makes the model more robust to local perturbations, such as additive noise or motion blur. We compare FCSN against other state-of-the-art global context-aware models (UNet++, DeepLabV3+, UNETR) on 5 medical image segmentation tasks (ISIC_2018, RIM_CUP, RIM_DISC, PROSTATE, FETAL). When compared with UNETR, FCSN attains significantly lower Hausdorff scores of 19.14 (6%), 17.42 (6%), 9.16 (14%), 11.18 (22%), and 5.98 (6%) on the ISIC_2018, RIM_CUP, RIM_DISC, PROSTATE, and FETAL tasks, respectively. Moreover, FCSN is lightweight because it discards the decoder module. FCSN requires only 29.7 M parameters, which is 75.6 M and 9.9 M fewer than UNETR and DeepLabV3+, respectively. FCSN attains inference and training speeds of 1.6 ms/img and 6.3 ms/img, which are 8x and 3x faster than UNet and UNETR. Our work is available at https://github.com/nus-morninlab/FCSN.
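The core idea of representing a mask by the complex Fourier coefficients of its contour can be illustrated with a minimal sketch. This is not the paper's implementation: the function names are hypothetical, and while the paper computes the coefficients by integrating over the contour, for a uniformly sampled closed contour the discrete Fourier transform is the natural discrete analogue, which is what is used here. The contour points are treated as complex numbers z = x + iy, and a truncated set of low-frequency coefficients is kept as a compact shape representation.

```python
import numpy as np

def fourier_coefficients(contour, n_coeffs=8):
    """Complex Fourier coefficients of a closed 2-D contour.

    contour: (N, 2) array of (x, y) boundary points, uniformly sampled.
    Returns the n_coeffs lowest-frequency coefficients and their
    integer frequencies (positive and negative).
    """
    z = contour[:, 0] + 1j * contour[:, 1]      # points as complex numbers
    coeffs = np.fft.fft(z) / len(z)             # normalised DFT coefficients
    freqs = (np.fft.fftfreq(len(z)) * len(z)).astype(int)
    keep = np.argsort(np.abs(freqs))[:n_coeffs]  # lowest |frequency| first
    return coeffs[keep], freqs[keep]

def reconstruct(coeffs, freqs, n_points=128):
    """Evaluate the truncated Fourier series at n_points samples of t in [0, 1)."""
    t = np.linspace(0.0, 1.0, n_points, endpoint=False)
    z = sum(c * np.exp(2j * np.pi * f * t) for c, f in zip(coeffs, freqs))
    return np.stack([z.real, z.imag], axis=1)

# Example: a circular "mask" contour of radius 5 centred at (10, 10).
theta = np.linspace(0, 2 * np.pi, 128, endpoint=False)
circle = np.stack([10 + 5 * np.cos(theta), 10 + 5 * np.sin(theta)], axis=1)

coeffs, freqs = fourier_coefficients(circle, n_coeffs=8)
recon = reconstruct(coeffs, freqs, n_points=128)
```

A circle's contour signal contains only the frequency-0 (centroid) and frequency-1 terms, so even 8 coefficients reconstruct it essentially exactly; for irregular anatomical shapes, more coefficients trade compactness for fidelity. Because each coefficient depends on the entire contour, regressing them forces a model to reason about the whole shape rather than individual pixels, which is the intuition behind FCSN's global context awareness.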