Abstract
Optimal Transport (OT) addresses the problem of transforming one measure into another at minimal cost, and has found wide application in machine learning. However, the heavy computational burden of the primal OT distance makes it prohibitive for prevalent high-dimensional problems. Recent work adds an entropic regularization term to the primal OT problem, yielding a strictly convex problem that can be solved with stochastic optimization. We focus on optimization methods for regularized discrete OT and semi-discrete OT. In place of the original SAG method for discrete OT, we apply SAGA, whose gradient estimate is unbiased, and SVRG, which avoids SAG's memory overhead of storing per-sample gradients. In addition, we define a notion of regret for the semi-discrete OT problem and solve it from an online-learning perspective. To the best of our knowledge, our study is the first to solve the semi-discrete OT problem with the follow-the-regularized-leader (FTRL) approach. Our FTRL-\textit{current} algorithm converges faster than existing algorithms for the semi-discrete OT problem.
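To make the setting concrete, the sketch below runs plain averaged stochastic gradient ascent on the entropic semi-dual of a discrete OT problem (the baseline that SAG/SAGA/SVRG-style methods improve on). All names, sizes, and the step-size schedule here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Illustrative sketch (assumed setup): averaged SGD on the entropic
# semi-dual of discrete OT between empirical measures mu and nu.
rng = np.random.default_rng(0)
n, m, eps = 50, 40, 0.1                      # sample sizes, regularization
x = rng.normal(size=(n, 2))                  # support of mu
y = rng.normal(size=(m, 2))                  # support of nu
mu = np.full(n, 1.0 / n)
nu = np.full(m, 1.0 / m)
C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)  # squared-Euclidean cost

def grad_i(v, i):
    # Stochastic gradient of the semi-dual objective w.r.t. the dual
    # potential v, for a single source point x_i sampled from mu.
    z = (v - C[i]) / eps
    z -= z.max()                              # stabilize the soft-max
    p = nu * np.exp(z)
    p /= p.sum()                              # Gibbs conditional over targets
    return nu - p                             # gradient sums to zero

v = np.zeros(m)
v_avg = np.zeros(m)
for t in range(1, 5001):
    i = rng.choice(n, p=mu)                   # draw a source point from mu
    v += (1.0 / np.sqrt(t)) * grad_i(v, i)    # ascent on the concave semi-dual
    v_avg += (v - v_avg) / t                  # Polyak-Ruppert averaging
```

Because the semi-dual is concave and each stochastic gradient is cheap (one row of the cost matrix), variance-reduced methods such as SAGA and SVRG can replace the plain SGD step above to obtain faster convergence, which is the setting the abstract refers to.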