Abstract

Recent works on deep conditional random fields (CRFs) have set new records on many vision tasks involving structured prediction. Here, we propose a fully connected deep continuous CRF model with task-specific losses for both discrete and continuous labeling problems. We demonstrate the usefulness of the proposed model on multi-class semantic labeling (discrete) and robust depth estimation (continuous) problems. In our framework, we model both the unary and the pairwise potential functions as deep convolutional neural networks (CNNs), which are jointly learned in an end-to-end fashion. The proposed method retains the main advantage of continuously valued CRFs, namely a closed-form solution for maximum a posteriori (MAP) inference. To better account for the quality of the predicted estimates during the course of learning, instead of the commonly employed maximum-likelihood CRF parameter learning protocol, we propose task-specific loss functions for learning the CRF parameters; this enables direct optimization of the quality of the MAP estimates during training. Specifically, we optimize a multi-class classification loss for the semantic labeling task and Tukey's biweight loss for the robust depth estimation problem. Experimental results on semantic labeling and robust depth estimation demonstrate that the proposed method compares favorably against both baseline and state-of-the-art methods. In particular, we show that although the proposed deep CRF model is continuously valued, when equipped with a task-specific loss it achieves impressive results even on discrete labeling tasks.
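For concreteness, the two ingredients highlighted above can be sketched in standard form; the tuning constants and the exact potentials used in the paper are not reproduced here, so both expressions below are illustrative only. Tukey's biweight loss on a residual r with tuning constant c is

\rho(r) =
\begin{cases}
\dfrac{c^{2}}{6}\left[1 - \left(1 - (r/c)^{2}\right)^{3}\right], & |r| \le c,\\[4pt]
\dfrac{c^{2}}{6}, & |r| > c,
\end{cases}

and, assuming a quadratic (Gaussian-form) CRF energy of the common type
E(\mathbf{y} \mid \mathbf{x}) = \sum_{i} \bigl(y_i - z_i(\mathbf{x})\bigr)^{2} + \lambda \sum_{\{i,j\}} w_{ij}(\mathbf{x})\,(y_i - y_j)^{2},
with CNN-predicted unary outputs z_i and pairwise weights w_{ij} (an assumed form, not necessarily the paper's exact energy), the closed-form MAP solution is
\mathbf{y}^{*} = (\mathbf{I} + \lambda \mathbf{L})^{-1}\,\mathbf{z},
where \mathbf{L} is the graph Laplacian built from the weights w_{ij}.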
