Abstract

Image segmentation is typically used to locate objects and boundaries. It is essential in many clinical applications, such as the pathological diagnosis of hepatic diseases, surgical planning, and postoperative assessment. The segmentation task is hampered by fuzzy boundaries, complex backgrounds, and the considerably varying appearance of objects of interest. The success of the procedure also remains highly dependent on the operator's skill and hand–eye coordination. This paper is therefore strongly motivated by the need for an early and accurate diagnosis of detected objects in medical images. We propose a new polyp segmentation method based on a combination of multiple deep encoder–decoder networks, called CDED-net. The architecture not only captures multi-level contextual information by extracting discriminative features at different effective fields-of-view and multiple image scales, but also learns rich features from missing pixels during the training phase. Moreover, the network captures object boundaries by using multiscale effective decoders. We also propose a novel strategy for improving segmentation performance that combines a boundary-emphasizing data augmentation method with a new effective Dice loss function. The goal of this strategy is to make our deep learning network robust to poorly defined object boundaries, which are caused by the non-specular transition zone between the background and foreground regions. To provide a general view of the proposed method, our network was trained and evaluated on three well-known polyp datasets: CVC-ColonDB, CVC-ClinicDB, and ETIS-Larib PolypDB. Furthermore, we used the Pedro Hispano Hospital (PH2) dataset, the ISBI 2016 skin lesion segmentation dataset, and a CT healthy abdominal organ segmentation dataset to demonstrate our network's generalization ability.
Our results reveal that the CDED-net significantly surpasses the state-of-the-art methods.
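The abstract does not give the exact form of the paper's "effective" Dice loss, but it builds on the standard soft Dice loss used in segmentation. As a hedged illustration only (not the authors' variant), a minimal NumPy sketch of the conventional soft Dice loss between a predicted probability map and a binary ground-truth mask is:

```python
import numpy as np

def dice_loss(pred, target, smooth=1.0):
    """Soft Dice loss: 1 - Dice coefficient.

    pred   -- predicted probabilities in [0, 1], any shape
    target -- binary ground-truth mask, same shape as pred
    smooth -- small constant that avoids division by zero and
              stabilizes gradients for near-empty masks
    """
    pred = np.asarray(pred, dtype=float).ravel()
    target = np.asarray(target, dtype=float).ravel()
    intersection = np.sum(pred * target)
    dice = (2.0 * intersection + smooth) / (np.sum(pred) + np.sum(target) + smooth)
    return 1.0 - dice
```

A perfect prediction yields a loss near 0, while a complete mismatch pushes the loss toward 1; the paper's contribution is a modified version of this objective tuned for fuzzy polyp boundaries.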
