Abstract

This paper presents a simple yet effective multilayer perceptron (MLP) architecture, namely CycleMLP, a versatile neural backbone network capable of solving various dense visual prediction tasks such as object detection, segmentation, and human pose estimation. Compared with recent advanced MLP architectures such as MLP-Mixer [89], ResMLP [90], and gMLP [58], whose designs are sensitive to image size and thus infeasible for dense prediction tasks, CycleMLP has two appealing advantages. (1) CycleMLP can cope with various spatial sizes of images. (2) CycleMLP achieves linear computational complexity with respect to image size by using local windows, whereas previous MLPs have O(N²) computational complexity due to their fully spatial connections. In addition, the relationship among convolution, multi-head self-attention in Transformers, and CycleMLP is discussed through an intuitive theoretical analysis. We build a family of models that surpass state-of-the-art MLP and Transformer models, e.g., Swin Transformer [60], while using fewer parameters and FLOPs. CycleMLP expands the applicability of MLP-like models, making them versatile backbone networks that achieve competitive results on dense prediction tasks. For example, CycleMLP-Tiny outperforms Swin-Tiny by 1.3% mIoU on the ADE20K dataset with fewer FLOPs. Moreover, CycleMLP also shows excellent zero-shot robustness on the ImageNet-C dataset. The source code and models are available at https://github.com/ShoufaChen/CycleMLP.
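To make the two advantages concrete, the sketch below illustrates a Cycle FC-style layer in the spirit described above: a channel-mixing projection whose per-channel sampling position cycles over a small local window, so the cost stays linear in the number of pixels and the layer accepts arbitrary input resolutions. This is an illustrative sketch only, not the authors' official implementation (which is built on deformable convolution); the circular shift via torch.roll, the height-only pseudo-kernel, and the names CycleFCSketch and step_h are assumptions made here for brevity.

```python
# Illustrative sketch of a Cycle FC-style layer (assumption: circular padding
# via torch.roll and a (step_h x 1) pseudo-kernel along height only; the
# official CycleMLP code instead uses deformable convolution with zero padding).
import torch
import torch.nn as nn


class CycleFCSketch(nn.Module):
    """Channel MLP whose per-channel sampling point cycles along the height axis."""

    def __init__(self, dim: int, step_h: int = 3):
        super().__init__()
        # Plain channel-mixing projection, shared across all spatial positions.
        self.proj = nn.Linear(dim, dim)
        # Offset for each channel cycles over {-(step_h//2), ..., +step_h//2}.
        offsets = [(c % step_h) - step_h // 2 for c in range(dim)]
        self.register_buffer("offsets", torch.tensor(offsets))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, H, W, C); works for any H and W, cost is linear in H * W.
        shifted = torch.empty_like(x)
        for offset in self.offsets.unique():
            mask = self.offsets == offset
            # Each channel reads its value from a cyclically shifted row.
            shifted[..., mask] = torch.roll(x[..., mask], shifts=int(offset), dims=1)
        return self.proj(shifted)


if __name__ == "__main__":
    layer = CycleFCSketch(dim=64, step_h=3)
    for h, w in [(32, 32), (48, 40)]:  # arbitrary spatial sizes
        print(layer(torch.randn(2, h, w, 64)).shape)
```

Because the projection is applied per position and only a fixed-size window of offsets is touched per channel, the computation grows linearly with H × W, in contrast to the quadratic cost of fully spatial MLP mixing.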
