Abstract

In this paper, we explore full-resolution image segmentation, focusing on learning full-resolution representations for biomedical images. We divide the original-resolution image into patches of different sizes at different stages and extract local features from large to small patches using efficient, flexible components from modern convolutional neural networks (CNNs). Meanwhile, a multilayer perceptron (MLP) block designed to model long-range dependencies between patches compensates for the inductive bias inherent in convolution operations. In addition, we perform multi-scale fusion at each stage, receiving representation information from parallel paths and producing a rich full-resolution representation. We evaluate the proposed method on several biomedical image segmentation tasks, where it achieves competitive performance compared with recent deep learning segmentation methods. We hope this method will serve as a useful alternative for biomedical image segmentation and offer an improved basis for research on full-resolution representations.
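The abstract's two core operations, splitting a stage's feature map into patches and mixing information across those patches with an MLP, can be sketched as follows. This is a minimal NumPy illustration under our own assumptions, not the paper's actual architecture; the function names `extract_patches` and `token_mix` are hypothetical, and the random weights stand in for learned parameters:

```python
import numpy as np

def extract_patches(img, patch):
    """Split an (H, W, C) image into non-overlapping patch x patch tiles,
    returning one flattened feature vector per patch: (num_patches, patch*patch*C)."""
    H, W, C = img.shape
    tiles = img.reshape(H // patch, patch, W // patch, patch, C)
    return tiles.transpose(0, 2, 1, 3, 4).reshape(-1, patch * patch * C)

def token_mix(tokens, hidden, seed=0):
    """Token-mixing MLP: a two-layer MLP applied ACROSS the patch axis,
    so every patch can exchange information with every other patch
    (the long-range dependencies that local convolutions miss)."""
    n, d = tokens.shape
    rng = np.random.default_rng(seed)
    w1 = rng.standard_normal((n, hidden)) * 0.02   # stand-in for learned weights
    w2 = rng.standard_normal((hidden, n)) * 0.02
    mixed = (np.maximum(tokens.T @ w1, 0.0) @ w2).T  # (d, n) -> back to (n, d)
    return tokens + mixed  # residual connection keeps local features intact

# Two "stages" with different patch sizes on the same image, as in the abstract:
img = np.random.default_rng(1).standard_normal((8, 8, 3))
coarse = token_mix(extract_patches(img, 4), hidden=16)  # 4 large patches
fine   = token_mix(extract_patches(img, 2), hidden=16)  # 16 small patches
```

A real implementation would use learned convolutional stems per stage and fuse the parallel paths at each stage; here the two calls only illustrate that coarse and fine patch grids coexist over the same full-resolution input.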
