Abstract

Estimating a set of orthogonal functions from a finite set of noisy data plays a crucial role in several areas, such as imaging, dictionary learning, and compressed sensing. The problem turns out to be especially hard because of its intrinsic non-convexity. In this paper, we solve it by recasting it in the framework of multi-task learning in Hilbert spaces, where orthogonality acts as an inductive bias. Two perspectives are analyzed. The first is mainly theoretical. It considers a formulation of the problem where non-orthogonal function estimates are seen as noisy data belonging to an infinite-dimensional space, from which orthogonal functions have to be reconstructed. We then provide results concerning the existence and convergence of the optimizers. The second is more application-oriented. It consists of a learning scheme where orthogonal functions are directly inferred from a finite amount of noisy data. It relies on regularization in reproducing kernel Hilbert spaces and on the introduction of special penalty terms promoting orthogonality among tasks. The problem is then cast in a Bayesian framework, overcoming non-convexity through an efficient Markov chain Monte Carlo scheme. When orthogonality is not certain, our scheme can also infer from the data whether this form of task interaction actually holds.
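
To make the regularization idea concrete, below is a minimal numerical sketch of multi-task kernel regression with a penalty discouraging large inner products between task functions. It is only an illustration of the general approach, not the authors' formulation: the Gaussian kernel, the squared-inner-product penalty, the plain gradient descent (swapped in here for the paper's Markov chain Monte Carlo scheme), and all function names and parameter values are assumptions made for the example.

# Illustrative sketch (assumptions labeled above): each task function is
# expanded via the representer theorem as f_t = sum_i C[i, t] * K(x_i, .),
# so the RKHS inner product <f_s, f_t> equals c_s' K c_t.
import numpy as np

def gaussian_kernel(X, Z, width=1.0):
    # Gram matrix of a Gaussian (RBF) kernel between two point sets.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

def fit_orthogonal_tasks(X, Y, lam=1e-2, gam=1.0, steps=2000, lr=1e-2):
    # Minimize 0.5*||K C - Y||^2 + 0.5*lam*sum_t c_t' K c_t
    #          + gam * sum_{s<t} (c_s' K c_t)^2
    # by gradient descent on the coefficient matrix C (one column per task).
    n, T = Y.shape
    K = gaussian_kernel(X, X)
    C = np.zeros((n, T))
    for _ in range(steps):
        F = K @ C                          # task function values at the inputs
        G = K @ (F - Y) + lam * (K @ C)    # gradient of fit + ridge terms
        M = C.T @ K @ C                    # pairwise RKHS inner products
        np.fill_diagonal(M, 0.0)           # penalize only cross-task products
        G += 2 * gam * K @ C @ M           # gradient of the orthogonality penalty
        C -= lr * G / n
    return C, K

# Toy usage: two noisy tasks sampled on [0, 1].
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(40, 1))
Y = np.column_stack([np.sin(2 * np.pi * X[:, 0]),
                     np.cos(2 * np.pi * X[:, 0])]) + 0.1 * rng.standard_normal((40, 2))
C, K = fit_orthogonal_tasks(X, Y)
print("cross-task inner product after fitting:", (C.T @ K @ C)[0, 1])

Note that the penalty makes the objective non-convex in C, which is precisely the difficulty the paper addresses; the gradient descent above is a simple local heuristic standing in for the Bayesian treatment.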
