Abstract

We study the energy minimization problem in low-level vision tasks from a novel perspective. We replace the heuristic regularization term with a data-driven, learnable subspace constraint, and preserve the data term to exploit domain knowledge derived from the first principles of a task. This learning subspace minimization (LSM) framework unifies the network structures and the parameters for many different low-level vision tasks, which allows us to train a single network for multiple tasks simultaneously with shared parameters, and even to generalize the trained network to an unseen task as long as its data term can be formulated. We validate the LSM framework on four low-level vision tasks, including edge detection, interactive segmentation, stereo matching, and optical flow, and evaluate the network on various datasets. The experiments demonstrate that the proposed LSM achieves state-of-the-art results with a smaller model size, faster training convergence, and real-time inference.
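To make the formulation concrete, the following is a minimal sketch of the subspace-constrained minimization as we read it from the abstract; the symbols $D$ (data term), $R$ (hand-crafted regularizer), $V$ (network-predicted subspace basis), and $c$ (subspace coefficients) are our own notation and are not drawn from the paper text shown here:

\[
\underbrace{\min_{x}\; D(x) + R(x)}_{\text{classical energy minimization}}
\quad\longrightarrow\quad
\min_{c}\; D(Vc), \qquad x = Vc,
\]

so the heuristic regularizer $R$ is removed, and its role is instead played implicitly by restricting the solution $x$ to the low-dimensional subspace spanned by the learned basis $V$, while the task-specific data term $D$ is kept unchanged.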
