Abstract

Image segmentation is an important task in biomedicine, often required for proper diagnosis and prognosis of many diseases. Deep learning (DL) based segmentation methods have received considerable attention in recent years due to the increasing availability of clinical datasets. Many novel ideas have been proposed over the years, driving progress in automatic segmentation research. Contrary to the prevailing theme in the literature, we demonstrate that considering the background tissue segmentation task alongside the main foreground task can improve overall performance from a general medical image segmentation perspective. We therefore propose a DL framework called Twin Segmentation Network (Twin-SegNet) that ties together two streams (foreground and background) through an image reconstruction task. A boxed mean squared error (MSE) loss is proposed to complement the Dice losses from both streams. We further propose a Wavelet Convolutional Block (WCB) to enhance the edge-information-extracting capabilities of both streams, and a Partial Channel Recalibration (PCR) block that allows mutual feature exchange between the two streams, so that each stream can emphasize the channels carrying more discriminative and relevant features. We present experimental results on five public datasets: BUSI, GLAS, ISIC-2018, MoNuSeg, and CVC-ClinicDB. Unlike conventional baselines that perform convincingly on some datasets and poorly on others, Twin-SegNet consistently achieves state-of-the-art results, with F1 scores of 88.46%, 93.11%, 91.61%, 81.78%, and 94.69% on these datasets respectively, showing its great potential as a general segmentation framework.
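The abstract describes a training objective that combines Dice losses from the foreground and background streams with a "boxed" MSE reconstruction loss, but does not give its exact form. The sketch below is one plausible reading, not the paper's actual implementation: `twin_loss`, `boxed_mse`, the bounding-box interpretation of "boxed", and the weighting `lam` are all assumptions for illustration.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss: 1 - 2|P.T| / (|P| + |T|), with eps for stability."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def boxed_mse(recon, image, box):
    """MSE restricted to a bounding box (y0, y1, x0, x1).

    This is a guess at what 'boxed' MSE means; the paper defines the
    actual formulation.
    """
    y0, y1, x0, x1 = box
    diff = recon[y0:y1, x0:x1] - image[y0:y1, x0:x1]
    return float(np.mean(diff ** 2))

def twin_loss(fg_pred, bg_pred, fg_mask, recon, image, box, lam=0.5):
    """Hypothetical combined objective: foreground Dice + background Dice
    + lam * boxed reconstruction MSE (lam is an assumed weighting)."""
    bg_mask = 1.0 - fg_mask  # background mask is the complement of the foreground
    return (dice_loss(fg_pred, fg_mask)
            + dice_loss(bg_pred, bg_mask)
            + lam * boxed_mse(recon, image, box))
```

With perfect predictions on both streams and an exact reconstruction, this combined loss goes to zero, which is the sanity property any such formulation should satisfy regardless of how the boxed term is actually defined.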
