Abstract

An important step in the solution of a matrix nearness problem that arises in certain machine learning applications is finding the zero of $f(\alpha) = \bm{z}^T \exp(\log X + \alpha\bm{z}\bm{z}^T)\bm{z} - b$. The matrix-valued exponential and logarithm in $f(\alpha)$ arise from the use of the von Neumann matrix divergence $\operatorname{tr}(X \log X - X \log Y - X + Y)$ to measure the nearness between the positive definite matrices $X$ and $Y$. An iterative algorithm for the underlying matrix nearness problem requires the zero of $f(\alpha)$ to be computed repeatedly. In this paper we propose zero-finding algorithms whose efficiency comes from exploiting the special structure of the objective function. We show how to compute the derivative of $f$ efficiently, thereby allowing the use of Newton-type methods. Numerical experiments demonstrate the advantage of our algorithms.
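
To make the setting concrete, here is a minimal dense-matrix Newton iteration for the zero of $f$, written as a sketch rather than as the paper's structure-exploiting algorithm. It assumes $X$ is symmetric positive definite and uses the identity $f'(\alpha) = \bm{z}^T D\exp(A)[\bm{z}\bm{z}^T]\bm{z}$ with $A = \log X + \alpha\bm{z}\bm{z}^T$, where $D\exp(A)[E]$ denotes the Fréchet derivative of the matrix exponential at $A$ in direction $E$, computed here with scipy.linalg.expm_frechet.

```python
import numpy as np
from scipy.linalg import expm_frechet

def newton_zero(X, z, b, alpha0=0.0, tol=1e-12, max_iter=50):
    """Newton iteration for f(alpha) = z^T exp(log X + alpha*z*z^T) z - b.

    Dense-matrix sketch only; the paper's algorithms instead exploit
    the rank-one structure of alpha*z*z^T for efficiency.
    """
    # log X via the eigendecomposition of the SPD matrix X
    w, V = np.linalg.eigh(X)
    L = (V * np.log(w)) @ V.T
    E = np.outer(z, z)                      # rank-one direction z z^T
    alpha = alpha0
    for _ in range(max_iter):
        A = L + alpha * E
        # expm_frechet returns exp(A) and the Frechet derivative
        # D exp(A)[E], giving f'(alpha) = z^T D exp(A)[E] z
        expA, dexpA = expm_frechet(A, E)
        f = z @ expA @ z - b
        if abs(f) < tol:
            break
        alpha -= f / (z @ dexpA @ z)
    return alpha

# Usage on a small synthetic SPD matrix (illustrative values only):
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
X = M @ M.T + 4 * np.eye(4)        # symmetric positive definite
z = rng.standard_normal(4)
b = 2.0 * (z @ X @ z)              # target chosen so a zero with alpha > 0 exists
alpha = newton_zero(X, z, b)
```

Note that $f'(\alpha) = \int_0^1 (\bm{z}^T e^{sA}\bm{z})(\bm{z}^T e^{(1-s)A}\bm{z})\,ds > 0$, so $f$ is strictly increasing and the Newton step is always well defined; the cost per iteration in this naive sketch is dominated by the dense Fréchet-derivative computation, which is exactly the expense the paper's structured algorithms are designed to avoid.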
