Abstract

We introduce the Lipschitz matrix: a generalization of the scalar Lipschitz constant for functions with many inputs. Among the Lipschitz matrices compatible with a particular function, we choose the smallest such matrix in the Frobenius norm to encode the structure of that function. The Lipschitz matrix then provides a function-dependent metric on the input space. Altering this metric to reflect a particular function improves the performance of many tasks in computational science. Compared to the Lipschitz constant, the Lipschitz matrix reduces the worst-case cost of approximation, integration, and optimization; if the Lipschitz matrix is low-rank, this cost no longer depends on the dimension of the input but instead on the rank of the Lipschitz matrix, defeating the curse of dimensionality. Both the Lipschitz constant and the Lipschitz matrix quantify the uncertainty in the function away from point queries, and using the Lipschitz matrix reduces this uncertainty. If we build a minimax space-filling design of experiments in the Lipschitz matrix metric, we can further reduce this uncertainty. When the Lipschitz matrix is approximately low-rank, we can perform parameter reduction by constructing a ridge approximation whose active subspace is the span of the dominant eigenvectors of the Lipschitz matrix. In summary, the Lipschitz matrix provides a new tool for analyzing complex models arising in computational science and for performing parameter reduction on them.
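For concreteness, one way to formalize the quantities named above (the notation here is our reading of the abstract, not text quoted from the paper): a matrix \(\mathbf{L}\) is a Lipschitz matrix for a function \(f\) on a domain \(\mathcal{D} \subseteq \mathbb{R}^m\) if it bounds every first difference,

\[ |f(\mathbf{x}_1) - f(\mathbf{x}_2)| \le \|\mathbf{L}(\mathbf{x}_1 - \mathbf{x}_2)\|_2 \quad \text{for all } \mathbf{x}_1, \mathbf{x}_2 \in \mathcal{D}, \]

and the matrix used throughout is, per the abstract, a minimizer of the Frobenius norm \(\|\mathbf{L}\|_F\) over all compatible matrices. Under this reading, the ridge approximation mentioned at the end takes the form \(f(\mathbf{x}) \approx g(\mathbf{U}^\top \mathbf{x})\), where the columns of \(\mathbf{U}\) span the dominant eigenvectors of the Lipschitz matrix (assumed symmetric positive semidefinite).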
