In referring segmentation, modeling the complex constraints carried by the multimodal input is one of the most challenging problems. As the information in a given language expression and image becomes increasingly rich, most current one-stage methods that directly output the segmentation mask struggle to understand the complicated relationships between the image and the expression. In this work, we propose PrimitiveNet, which decomposes the difficult global constraint into a set of simple primitives. Each primitive produces a primitive mask that represents only a simple semantic meaning, e.g., all instances from the same category. The output segmentation mask is then computed by selectively combining these primitives according to the language expression. Furthermore, we propose a cross-primitive attention (CPA) module to exchange information among the primitives, and a language-primitive attention (LPA) module to exchange information between the primitives and the language expression. The proposed CPA and LPA help the network find appropriate weights for the primitive masks, so as to recover the target object. Extensive experiments demonstrate the effectiveness of our design and show that the proposed network outperforms current state-of-the-art referring segmentation methods on the three RefCOCO datasets.
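To make the combination step concrete, the following is a minimal sketch of how language-derived weights could selectively combine primitive masks. It is not the authors' implementation: the class name `PrimitiveCombiner`, the tensor shapes, the use of standard multi-head attention as stand-ins for the CPA and LPA modules, and the linear weight head are all illustrative assumptions.

```python
# A minimal sketch of the primitive-combination idea, assuming PyTorch-style
# tensors. All module choices and shapes here are assumptions for illustration,
# not the paper's architecture.
import torch
import torch.nn as nn


class PrimitiveCombiner(nn.Module):
    """Selectively combines K primitive masks using language-derived weights."""

    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        # Stand-in for CPA: primitives attend to each other.
        self.cpa = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Stand-in for LPA: primitives attend to the language tokens.
        self.lpa = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Scores each primitive embedding to weight its mask.
        self.weight_head = nn.Linear(dim, 1)

    def forward(self, prim_masks, prim_emb, lang_emb):
        # prim_masks: (B, K, H, W) -- one mask per primitive
        # prim_emb:   (B, K, D)    -- one embedding per primitive
        # lang_emb:   (B, T, D)    -- token embeddings of the expression
        x, _ = self.cpa(prim_emb, prim_emb, prim_emb)  # exchange info among primitives
        x, _ = self.lpa(x, lang_emb, lang_emb)         # condition primitives on language
        # One scalar weight per primitive, normalized across primitives.
        w = torch.softmax(self.weight_head(x).squeeze(-1), dim=1)  # (B, K)
        # Weighted sum of the primitive masks recovers the target mask.
        return torch.einsum("bk,bkhw->bhw", w, prim_masks)  # (B, H, W)


if __name__ == "__main__":
    B, K, T, D, H, W = 2, 16, 10, 256, 64, 64
    net = PrimitiveCombiner(dim=D)
    out = net(torch.rand(B, K, H, W), torch.randn(B, K, D), torch.randn(B, T, D))
    print(out.shape)  # torch.Size([2, 64, 64])
```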