Abstract

Feature representation and context incorporation play a key role in image semantic segmentation. We propose a framework for very-high-resolution image semantic segmentation that combines learned internal representations and context in a unified manner. The proposed method uses contextualized convolutional neural networks to learn feature representations and a higher-order conditional random field to encode context. We first devise an architecture that encodes semantic edge context into convolutional neural networks, yielding learned features that help preserve semantic consistency within classes. The learned features and a superpixel-based higher-order potential are then integrated into a conditional random field to infer the semantic segmentation. Both pixel-level and region-level potentials are included in the conditional random field to address high intra-class variation. Experimental results on a publicly available semantic segmentation contest dataset demonstrate the effectiveness of the proposed method.
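The combination of pixel-level and region-level terms in the conditional random field can be illustrated with a minimal sketch. The function below evaluates a CRF energy with a unary (pixel-level) term, a Potts pairwise term, and a majority-vote superpixel consistency penalty in the spirit of robust P^n higher-order potentials; the function name, weights, and the exact form of each term are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def crf_energy(unary, pairwise_weight, labels, edges, superpixels, hop_weight):
    """Illustrative CRF energy with pixel-level unary, pairwise, and
    superpixel-based higher-order (label-consistency) potentials.
    All weights and term forms here are assumptions for demonstration."""
    # Unary term: cost of assigning each pixel its current label.
    e = unary[np.arange(len(labels)), labels].sum()
    # Pairwise Potts term: penalize neighbouring pixels taking different labels.
    for i, j in edges:
        if labels[i] != labels[j]:
            e += pairwise_weight
    # Higher-order term (simplified robust P^n style): penalize pixels that
    # disagree with the majority label inside each superpixel region.
    for region in superpixels:
        region_labels = labels[region]
        majority = np.bincount(region_labels).argmax()
        e += hop_weight * (region_labels != majority).sum()
    return e

# Tiny 4-pixel example: pixels 0,1 prefer label 0; pixels 2,3 prefer label 1.
unary = np.array([[0.0, 1.0], [0.0, 1.0], [1.0, 0.0], [1.0, 0.0]])
edges = [(0, 1), (1, 2), (2, 3)]
superpixels = [np.array([0, 1]), np.array([2, 3])]
labels = np.array([0, 0, 1, 1])
print(crf_energy(unary, 0.5, labels, edges, superpixels, 2.0))
```

Under this sketch, a labeling consistent within each superpixel pays only the pairwise cost across the region boundary, while a labeling that splits a superpixel incurs the additional higher-order penalty, which is the mechanism the region-level potential uses to suppress intra-class variation.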
