Abstract

Across industries, creating new datasets is often necessary, e.g., for specialized machine learning tasks where no datasets are readily available. However, creating a dataset can be very complex and time-consuming, especially for semantic segmentation tasks, where every pixel needs to be manually labeled. To arrive quickly at an initial model, it is desirable to drop most of the labeling requirements, e.g., by allowing sparse (lazy) annotations such as scribbles. Sometimes these initial models already produce satisfactory results; in other cases, they can help to refine and extend the dataset further. However, segmentation models typically do not accept sparse labels for training. We propose a simple dropout procedure that allows a model to handle scribble and point annotations. The procedure focuses on being easily transferable to existing semantic segmentation implementations and methods. As a representative example, a chicken semantic segmentation dataset was employed for the experiments. Although the model is trained on sparse annotation masks, it predicts dense masks. The result is a standalone model that requires no further human assistance after training. Two models trained on point annotations and scribbles were compared to a baseline model trained on dense masks. Although the baseline model performs best, our method achieves comparable results with just one percent of labeled pixels, as long as the sparse labels have good coverage.
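The abstract does not spell out how the training objective handles pixels that carry no annotation. A common way to let a standard segmentation pipeline consume sparse masks is to mark unlabeled pixels with a sentinel value and exclude them from the loss, so gradients flow only from scribbled or pointed pixels. The sketch below illustrates this idea with a masked cross-entropy in NumPy; the sentinel value `IGNORE` and the function name are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

IGNORE = 255  # hypothetical sentinel label marking unannotated pixels


def masked_ce_loss(logits, labels, ignore=IGNORE):
    """Cross-entropy averaged over labeled pixels only.

    logits: (H, W, C) raw class scores.
    labels: (H, W) integer class ids, with `ignore` on every
            pixel that the sparse (scribble/point) mask left blank.
    """
    mask = labels != ignore
    if not mask.any():
        # no annotated pixels in this crop: contribute nothing
        return 0.0
    # numerically stable log-softmax over the class axis
    z = logits - logits.max(axis=-1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    # pick the log-probability of the true class at labeled pixels
    picked = log_probs[mask, labels[mask]]
    return float(-picked.mean())


# usage: a 2x2 image where only one pixel carries a scribble label
logits = np.zeros((2, 2, 3))          # uniform scores over 3 classes
labels = np.full((2, 2), IGNORE)
labels[0, 0] = 1                      # the single annotated pixel
loss = masked_ce_loss(logits, labels)
```

Because the model still predicts a score for every pixel, it learns to output dense masks even though supervision touches only a small fraction of them; in frameworks such as PyTorch the same effect is typically obtained via an `ignore_index` argument on the cross-entropy loss.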
