Abstract
The goal of few-shot learning is to learn quickly in a low-data regime. Structured output tasks such as segmentation are challenging for few-shot learning because their outputs are high-dimensional and statistically dependent. For this problem, we propose improved guided networks and combine them with a fully connected conditional random field (CRF). The guided network extracts task representations from annotated support images through feature fusion, enabling fast, accurate inference on new, unannotated query images. By bringing together few-shot learning and fully connected CRFs, our method overcomes the poor localization of deep convolutional neural networks to segment objects accurately, and it can update to new tasks, without further optimization, when faced with new data. Our guided network achieves leading accuracy in terms of both annotation volume and annotation time.
Highlights
In the context of deep learning, each class typically requires thousands of training samples to saturate the performance of convolutional neural networks on known categories.
To solve the problems of limited training data and precise segmentation at the same time, we propose combining few-shot learning with the fully connected pairwise conditional random fields (CRFs) of Krähenbühl and Koltun [5], chosen for their efficient computation and strong localization performance.
We propose a new class of guided networks that incorporates fully connected CRFs.
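For reference, the fully connected pairwise CRF of Krähenbühl and Koltun [5] cited above minimizes a Gibbs energy defined over all pixel pairs (notation follows [5]):

```latex
E(\mathbf{x}) = \sum_i \psi_u(x_i) + \sum_{i<j} \psi_p(x_i, x_j),
\qquad
\psi_p(x_i, x_j) = \mu(x_i, x_j) \sum_m w^{(m)} k^{(m)}(\mathbf{f}_i, \mathbf{f}_j),
```

where the unary term \(\psi_u\) comes from the network's per-pixel prediction, \(\mu\) is a label compatibility function, and the \(k^{(m)}\) are Gaussian kernels over pixel positions and colors. The fully connected structure is tractable because mean-field inference with these kernels reduces to efficient high-dimensional filtering, which underlies the efficiency and localization properties mentioned above.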
Summary
In the context of deep learning, each class typically requires thousands of training samples to saturate the performance of convolutional neural networks on known categories. To solve the problems of limited training data and precise segmentation at the same time, we propose combining few-shot learning with the fully connected pairwise conditional random fields (CRFs) of Krähenbühl and Koltun [5], chosen for their efficient computation and strong localization performance. Our contributions are threefold: (1) we address the few-shot segmentation problem in which only a few sparsely pixelwise-annotated support images indicating the task are given, and unannotated query images must be segmented accordingly; (2) we introduce a new mechanism for merging images and annotations, improving learning time and inference accuracy and propagating pixel information across different images; and (3) we append a fully connected CRF behind the guided network, improving the network's ability to capture detailed features and achieve accurate object segmentation.
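The guidance mechanism in (2) can be sketched in a minimal form: fuse a support image's features with its annotation mask to obtain a task representation, then label query pixels by similarity to that representation. This is a hypothetical NumPy illustration of masked average pooling, not the paper's implementation; all names (`masked_average_pool`, `segment_query`) and the cosine-similarity decision rule are illustrative assumptions.

```python
import numpy as np

def masked_average_pool(support_feats, support_mask):
    """Fuse support features with their annotation mask.

    support_feats: (C, H, W) feature map of an annotated support image.
    support_mask:  (H, W) binary annotation (1 = object pixel).
    Returns a (C,) task-representation ("guidance") vector: the mean
    feature over annotated object pixels.
    """
    weights = support_mask / max(support_mask.sum(), 1)
    return (support_feats * weights).sum(axis=(1, 2))

def segment_query(query_feats, guidance, threshold=0.5):
    """Label each query pixel by cosine similarity to the guidance vector."""
    g = guidance / (np.linalg.norm(guidance) + 1e-8)
    q = query_feats / (np.linalg.norm(query_feats, axis=0, keepdims=True) + 1e-8)
    similarity = np.tensordot(g, q, axes=(0, 0))  # (H, W) similarity map
    return (similarity > threshold).astype(np.uint8)

# Toy example: 4-channel features on a 2x2 grid with one annotated pixel.
rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 2, 2))
mask = np.array([[1, 0], [0, 0]])
g = masked_average_pool(feats, mask)   # (4,) guidance vector
pred = segment_query(feats, g)         # (2, 2) binary segmentation
```

Because inference is just pooling and a similarity comparison, a new task needs no gradient updates, which matches the "no further optimization" property above; in the full method the resulting coarse prediction would then be refined by the fully connected CRF of (3).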