Abstract
Approximate inference in discrete graphical models by problem decomposition and Lagrangian relaxation has become a key technique in computer vision. The resulting dual objective function is, in principle, convenient from an optimization point of view. Due to its inherent non-smoothness, however, it is not directly amenable to efficient convex optimization. Related work either weakens the relaxation by smoothing or applies variants of the inefficient projected subgradient method. In either case, performance hinges on heuristic choices of tuning parameters that depend significantly on the specific problem at hand. In this paper, we introduce a novel approach based on bundle methods from the field of combinatorial optimization. It operates directly on the non-smooth dual objective function, requires no tuning parameters, and showed markedly improved efficiency uniformly across a large variety of problem instances, including benchmark experiments. Our code will be publicly available after publication of this paper.
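To illustrate the two optimization strategies contrasted in the abstract, the sketch below runs a projected subgradient ascent and a generic proximal bundle method on a toy non-smooth concave dual. This is a minimal sketch under assumed conditions, not the paper's MAP-inference dual or its implementation: the toy objective `f(lam) = min_i (A[i] @ lam + b[i])`, the step sizes, the proximal weight `t`, and the serious-step threshold are all illustrative assumptions.

```python
"""Sketch only: generic subgradient vs. proximal bundle ascent on a toy
piecewise-linear concave dual, not the paper's algorithm or problem."""
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, m = 3, 8                        # dual dimension, number of affine pieces
A = rng.normal(size=(m, n))
b = rng.normal(size=m)

def oracle(lam):
    """Return f(lam) = min_i (A[i].lam + b[i]) and one supergradient."""
    vals = A @ lam + b
    i = int(np.argmin(vals))
    return vals[i], A[i]

# --- baseline: subgradient ascent with a heuristic diminishing step size ---
lam = np.zeros(n)
for k in range(1, 201):
    _, g = oracle(lam)
    lam = lam + (1.0 / k) * g      # step-size rule is exactly the kind of tuning issue criticized above
f_subgrad, _ = oracle(lam)

# --- generic proximal bundle method (simplified serious/null-step logic) ---
center = np.zeros(n)
f_center, g = oracle(center)
cuts = [(center.copy(), f_center, g.copy())]   # bundle of linearizations
t = 1.0                                        # assumed proximal weight

def solve_bundle_subproblem(center, cuts, t):
    """max_{lam,v}  v - ||lam - center||^2 / (2t)   s.t.  v <= f_k + g_k.(lam - lam_k)."""
    def neg_obj(z):
        lam, v = z[:n], z[n]
        return -(v - np.dot(lam - center, lam - center) / (2.0 * t))
    cons = [{'type': 'ineq',
             'fun': (lambda z, lk=lk, fk=fk, gk=gk:
                     fk + gk @ (z[:n] - lk) - z[n])}
            for lk, fk, gk in cuts]
    z0 = np.concatenate([center, [min(fk for _, fk, _ in cuts)]])
    res = minimize(neg_obj, z0, method='SLSQP', constraints=cons)
    return res.x[:n], res.x[n]

for _ in range(30):
    cand, model_val = solve_bundle_subproblem(center, cuts, t)
    f_cand, g_cand = oracle(cand)
    cuts.append((cand, f_cand, g_cand))
    # serious step if the true increase achieves a fraction of the predicted one
    if f_cand - f_center >= 0.1 * (model_val - f_center):
        center, f_center = cand, f_cand

print("subgradient ascent:", f_subgrad)
print("proximal bundle   :", f_center)
```

The bundle iteration replaces the single supergradient step with a cutting-plane model of the dual stabilized by a proximal term, which is the design choice that removes the hand-tuned step-size schedule of the subgradient baseline.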