Abstract

Fully Convolutional Networks (FCNs) have emerged as powerful segmentation models but are usually designed manually, which requires extensive time and can result in large and complex architectures. There is growing interest in automatically designing efficient architectures that can accurately segment 3D medical images. However, most approaches either do not fully exploit volumetric information or do not optimize the model’s size. To address these problems, we propose a self-adaptive 2D–3D ensemble of FCNs called AdaEn-Net for 3D medical image segmentation that incorporates volumetric data and adapts to a particular dataset by optimizing both the model’s performance and size. The AdaEn-Net consists of a 2D FCN that extracts intra-slice information and a 3D FCN that exploits inter-slice information. The architecture and hyperparameters of the 2D and 3D networks are found through a multiobjective evolutionary algorithm that maximizes the expected segmentation accuracy and minimizes the number of parameters in the network. The main contribution of this work is a model that fully exploits volumetric information and automatically searches for a high-performing and efficient architecture. The AdaEn-Net was evaluated for prostate segmentation on the PROMISE12 Grand Challenge and for cardiac segmentation on the MICCAI ACDC challenge. In the first challenge, the AdaEn-Net ranks 9th out of 297 submissions and surpasses the performance of an automatically generated segmentation network while producing an architecture with 13× fewer parameters. In the second challenge, the proposed model ranks within the top 8 submissions and outperforms an architecture designed with reinforcement learning while having 1.25× fewer parameters.
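The bi-objective search described above can be illustrated with a minimal Pareto-based evolutionary loop. The sketch below is not the paper's algorithm: the search space, the parameter-count proxy, and the `expected_dice` placeholder are assumptions for illustration only, standing in for the candidate FCN hyperparameters and the expensive train-and-validate step.

```python
import random

# Hypothetical architecture encoding: each gene fixes one hyperparameter of a
# candidate FCN. Names and ranges are illustrative assumptions, not the
# paper's exact search space.
SEARCH_SPACE = {
    "depth":        [3, 4, 5],
    "base_filters": [8, 16, 32, 64],
    "kernel_size":  [3, 5],
}

def random_candidate():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def param_count(c):
    # Rough proxy for model size: filters double at each level of the FCN.
    return sum((c["base_filters"] * 2 ** d) ** 2 * c["kernel_size"] ** 2
               for d in range(c["depth"]))

def expected_dice(c):
    # Placeholder for the expensive step: train the candidate FCN and return
    # its expected segmentation Dice score on a validation split.
    raise NotImplementedError("train the candidate and evaluate Dice here")

def dominates(a, b):
    # a dominates b if it is no worse in both objectives (higher Dice,
    # fewer parameters) and strictly better in at least one.
    return (a["dice"] >= b["dice"] and a["params"] <= b["params"]
            and (a["dice"] > b["dice"] or a["params"] < b["params"]))

def pareto_front(evaluated):
    return [a for a in evaluated
            if not any(dominates(b, a) for b in evaluated if b is not a)]

def evolve(generations=10, pop_size=20, mutation_rate=0.3):
    population = [random_candidate() for _ in range(pop_size)]
    for _ in range(generations):
        evaluated = [{"genes": c,
                      "dice": expected_dice(c),
                      "params": param_count(c)} for c in population]
        parents = pareto_front(evaluated)
        # Refill the population by mutating Pareto-optimal parents.
        population = [p["genes"] for p in parents]
        while len(population) < pop_size:
            child = dict(random.choice(parents)["genes"])
            if random.random() < mutation_rate:
                gene = random.choice(list(SEARCH_SPACE))
                child[gene] = random.choice(SEARCH_SPACE[gene])
            population.append(child)
    # Return the non-dominated trade-off set from the last generation.
    return pareto_front(evaluated)
```

In this framing, the final Pareto front contains architectures that trade segmentation accuracy against parameter count, from which a compact, high-performing model can be selected.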
