Abstract

Sketch semantic segmentation is a fundamental computer vision task that poses great challenges due to the abstraction of sketches and the diversity of drawing styles. In this paper, we propose a sketch semantic segmentation method based on point-segment level interaction. Specifically, an enhanced local feature aggregation (ELFA) module is developed based on two kinds of distance information between the neighboring points/segments (NPs/NSs) and the corresponding center points/segments (CPs/CSs). The ELFA module not only extracts local features adequately, but also accounts for the different effects that the NPs/NSs have on the corresponding CPs/CSs. Based on the ELFA module, a point-level branch and a segment-level branch are established to encode the semantics of sketches at the point level and the segment level respectively, so that the features extracted by the two branches are complementary. Further, a point-segment level interaction (PSLI) module is designed to exchange information between the two levels and to reduce the loss of important semantic details caused by feature selection across multiple stages. The PSLI module can be inserted at several stages, which helps retain and exploit these useful details. Finally, point-level features and segment-level features are fused to obtain the semantic segmentation result. Extensive experiments on the SPG and SketchSeg-150K datasets show that the proposed method achieves state-of-the-art performance.
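To make the distance-based aggregation idea behind the ELFA module concrete, the following is a minimal PyTorch sketch, not the authors' implementation: it gathers the k nearest neighbors of each center point, combines neighbor features, center features, relative offsets, and distances, and weights the neighbors by their distance so that nearer points influence the center more. All tensor shapes, layer sizes, and the class name DistanceWeightedAggregation are illustrative assumptions.

```python
# Minimal sketch (assumed shapes, not the paper's code) of distance-weighted
# local feature aggregation in the spirit of the ELFA module, for points only.
import torch
import torch.nn as nn


class DistanceWeightedAggregation(nn.Module):
    """Aggregate the features of k nearest neighbors, weighting each neighbor
    by its distance to the center point so nearer neighbors contribute more.
    k and the layer widths are illustrative choices."""

    def __init__(self, in_dim: int, out_dim: int, k: int = 8):
        super().__init__()
        self.k = k
        # MLP over [neighbor feature, center feature, relative offset, distance]
        self.mlp = nn.Sequential(
            nn.Linear(2 * in_dim + 3, out_dim),
            nn.ReLU(inplace=True),
        )

    def forward(self, xyz: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        # xyz: (B, N, 2) point coordinates; feats: (B, N, C) point features
        B, N, C = feats.shape
        # Pairwise distances between all points: (B, N, N)
        dists = torch.cdist(xyz, xyz)
        # k nearest neighbors of each center point: (B, N, k)
        knn_dist, knn_idx = dists.topk(self.k, dim=-1, largest=False)

        # Gather neighbor coordinates and features: (B, N, k, 2) and (B, N, k, C)
        idx = knn_idx.unsqueeze(-1)
        nbr_xyz = torch.gather(
            xyz.unsqueeze(1).expand(B, N, N, 2), 2, idx.expand(B, N, self.k, 2))
        nbr_feat = torch.gather(
            feats.unsqueeze(1).expand(B, N, N, C), 2, idx.expand(B, N, self.k, C))

        # Relative offsets and distances between neighbors and their center point
        rel_xyz = nbr_xyz - xyz.unsqueeze(2)                    # (B, N, k, 2)
        ctr_feat = feats.unsqueeze(2).expand(B, N, self.k, C)   # (B, N, k, C)
        local = torch.cat(
            [nbr_feat, ctr_feat, rel_xyz, knn_dist.unsqueeze(-1)], dim=-1)

        # Softmax over negative distances: nearer neighbors get larger weights,
        # so different neighbors affect the center point differently.
        w = torch.softmax(-knn_dist, dim=-1).unsqueeze(-1)      # (B, N, k, 1)
        return (w * self.mlp(local)).sum(dim=2)                 # (B, N, out_dim)


if __name__ == "__main__":
    pts = torch.rand(2, 128, 2)   # 128 sketch points with 2-D coordinates
    f = torch.rand(2, 128, 32)    # 32-D input features per point
    out = DistanceWeightedAggregation(32, 64)(pts, f)
    print(out.shape)              # torch.Size([2, 128, 64])
```

A segment-level branch would apply the same pattern to stroke segments instead of individual points, and a PSLI-style interaction step would then exchange features between the two branches at several stages before fusion.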
