Abstract

Deep encoder–decoder neural networks such as U-Net have contributed significantly to computer vision applications, particularly image segmentation. Neural architecture search (NAS) has the potential to automatically adapt U-Net architectures to various medical image segmentation tasks. However, most NAS techniques focus solely on optimizing the segmentation accuracy of network architectures. In real-world medical image segmentation, two main challenges remain: poor image quality and diverse deployment devices with different computing capabilities. A large architecture designed only for high segmentation accuracy is difficult to run on many deployment devices. To address these challenges, this paper proposes a multi-objective evolutionary neural architecture search method (CTU-NAS) for U-Nets with diamond atrous convolution and Transformer modules for medical image segmentation. A hybrid U-Net architecture (CTU-Net) combining diamond atrous convolution and Transformer modules is designed as the supernet of CTU-NAS. A channel search strategy based on sorting and selection then speeds up the search for subnets by precisely selecting the most important channels and training them more frequently. In addition, CTU-NAS employs a dual acceleration mechanism based on weight sharing and a surrogate model to lower the cost of subnet evaluation. CTU-NAS applies a multi-objective evolutionary algorithm to balance segmentation accuracy against the number of parameters. Experimental results on two medical segmentation datasets show that CTU-NAS quickly generates a group of high-quality network architectures of different sizes whose performance outperforms or comes close to that of manually designed networks.
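To illustrate the multi-objective trade-off described in the abstract, the sketch below performs a simple non-dominated (Pareto) selection over candidate subnets scored on two objectives: segmentation accuracy (to maximize) and parameter count (to minimize). This is a minimal illustration under assumed data structures, not the authors' CTU-NAS implementation; the `Subnet` fields and the example candidates are hypothetical.

```python
# Minimal sketch of multi-objective subnet selection over two objectives:
# segmentation accuracy (higher is better) and parameter count (lower is better).
# Illustrative assumption only; not the paper's actual CTU-NAS code.
from dataclasses import dataclass
from typing import List


@dataclass
class Subnet:
    name: str
    accuracy: float   # e.g., validation Dice score (hypothetical)
    params: float     # number of parameters in millions (hypothetical)


def dominates(a: Subnet, b: Subnet) -> bool:
    """a dominates b if it is no worse on both objectives and strictly better on at least one."""
    no_worse = a.accuracy >= b.accuracy and a.params <= b.params
    better = a.accuracy > b.accuracy or a.params < b.params
    return no_worse and better


def pareto_front(candidates: List[Subnet]) -> List[Subnet]:
    """Keep only non-dominated subnets: the accuracy/size trade-off front."""
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other is not c)]


if __name__ == "__main__":
    population = [
        Subnet("small", accuracy=0.86, params=4.2),
        Subnet("medium", accuracy=0.90, params=12.5),
        Subnet("large", accuracy=0.91, params=30.1),
        Subnet("dominated", accuracy=0.85, params=15.0),  # worse than "medium" on both objectives
    ]
    for s in pareto_front(population):
        print(f"{s.name}: accuracy={s.accuracy:.2f}, params={s.params:.1f}M")
```

In a full multi-objective evolutionary search, such non-dominated sorting would typically guide selection across generations, yielding a set of architectures of different sizes rather than a single network.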
