Abstract

Semantic image segmentation is an important yet unsolved problem. One of the major challenges is the large variability of object scales. To tackle this scale problem, we propose a Scale-Adaptive Network (SAN) consisting of multiple branches, each responsible for segmenting objects within a certain range of scales. Given an image, SAN first computes a dense scale map that indicates the scale of each pixel, automatically determined by the size of its enclosing object. The features of the different branches are then fused according to the scale map to generate the final segmentation map. To ensure that each branch indeed learns features for a particular scale, we propose a scale-induced ground-truth map and enforce a scale-aware segmentation loss on the corresponding branch in addition to the final loss. Extensive experiments on the PASCAL-Person-Part, PASCAL VOC 2012, and Look into Person datasets demonstrate that our SAN handles the large variability of object scales and outperforms state-of-the-art semantic segmentation methods.
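To illustrate the fusion step described above, the following is a minimal PyTorch-style sketch, not the authors' implementation: it assumes the dense scale map is expressed as per-pixel logits over the branches, that there are three branches, and that the tensor shapes and the softmax weighting are illustrative choices rather than details taken from the paper.

```python
# Hypothetical sketch of scale-map-weighted fusion of branch features.
# Assumptions (not from the abstract): B branches, softmax per-pixel weights,
# branch features of shape (N, C, H, W) and a scale map of shape (N, B, H, W).
import torch
import torch.nn.functional as F

def fuse_branches(branch_feats, scale_logits):
    """Fuse per-branch features with per-pixel scale weights.

    branch_feats: list of B tensors, each (N, C, H, W), one per scale branch.
    scale_logits: tensor (N, B, H, W), unnormalized per-pixel scores over
                  the B branches (a stand-in for the dense scale map).
    """
    weights = F.softmax(scale_logits, dim=1)            # per-pixel branch weights
    fused = sum(w.unsqueeze(1) * f                       # weight each branch's features
                for w, f in zip(weights.unbind(dim=1), branch_feats))
    return fused                                         # (N, C, H, W) fused features

# Toy usage: three branches (e.g. small / medium / large objects) on a 64x64 map.
feats = [torch.randn(1, 256, 64, 64) for _ in range(3)]
scale_map = torch.randn(1, 3, 64, 64)
out = fuse_branches(feats, scale_map)
print(out.shape)  # torch.Size([1, 256, 64, 64])
```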
