Manifold Learning via Multi-Penalty Regularization

Highlights

  • Let X be a compact metric space and Y ⊂ R, with a joint probability measure ρ on the sample space Z = X × Y.

  • Many regularization-parameter selection approaches have been discussed for multi-penalty regularization of ill-posed inverse problems, such as the discrepancy principle [15, 31], the quasi-optimality principle [18, 32], the balanced-discrepancy principle [33], the heuristic L-curve [34], noise-structure-based parameter choice rules [35, 36, 37], and approaches that require reduction to single-penalty regularization [38].

  • We discuss the penalty balancing principle (PB-principle), previously considered for multi-penalty regularization of ill-posed problems [33], to choose the regularization parameters in our learning-theory framework; a code sketch of the two-penalty estimator follows these highlights.
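
To make the two-penalty setup above concrete, here is a minimal numerical sketch of a kernel least-squares estimator with an RKHS-norm penalty and a graph-Laplacian (manifold) penalty. The Gaussian kernel, the Laplacian construction, and the names `gaussian_kernel`, `graph_laplacian`, and `multi_penalty_fit` are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def gaussian_kernel(X, Z, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between rows of X and Z,
    # where X and Z are (m, d) arrays of sample points.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def graph_laplacian(X, sigma=1.0):
    # Unnormalized graph Laplacian L = D - W of a Gaussian similarity graph.
    W = gaussian_kernel(X, X, sigma)
    np.fill_diagonal(W, 0.0)
    return np.diag(W.sum(axis=1)) - W

def multi_penalty_fit(X, y, lam1, lam2, sigma=1.0):
    # Minimize (1/m)*sum_i (f(x_i) - y_i)^2 + lam1*||f||_H^2 + lam2*f'Lf.
    # By the representer theorem f = sum_i alpha_i K(x_i, .), and setting the
    # gradient to zero reduces to (K + m*lam1*I + m*lam2*L@K) alpha = y.
    m = len(y)
    K = gaussian_kernel(X, X, sigma)
    L = graph_laplacian(X, sigma)
    A = K + m * lam1 * np.eye(m) + m * lam2 * (L @ K)
    alpha = np.linalg.solve(A, y)
    return alpha, K
```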

Summary

Introduction

Caponnetto et al. [6] improved the error estimates for the regularized least-squares algorithm to optimal convergence rates under a polynomial decay condition on the eigenvalues of the integral operator. Many regularization-parameter selection approaches have been discussed for multi-penalty regularization of ill-posed inverse problems, such as the discrepancy principle [15, 31], the quasi-optimality principle [18, 32], the balanced-discrepancy principle [33], the heuristic L-curve [34], noise-structure-based parameter choice rules [35, 36, 37], and approaches that require reduction to single-penalty regularization [38]. We discuss the penalty balancing principle (PB-principle), previously considered for multi-penalty regularization of ill-posed problems [33], to choose the regularization parameters in our learning-theory framework.
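
As a rough illustration of how a balancing rule can be realized numerically, the sketch below (reusing `multi_penalty_fit` and `graph_laplacian` from the earlier sketch) alternates between refitting the estimator and updating each parameter so that its weighted penalty matches the data-fidelity term up to a factor `gamma`. This fixed-point update is an assumed, simplified reading of a balancing principle; the precise PB-principle of [33] may differ.

```python
def pb_balance(X, y, lam1=1e-2, lam2=1e-2, gamma=1.0, sigma=1.0, n_iter=20):
    # Hypothetical balancing iteration: refit the estimator, then update
    # each lambda so that lam_i * (its penalty term) matches the
    # data-fidelity term divided by gamma.
    L = graph_laplacian(X, sigma)
    for _ in range(n_iter):
        alpha, K = multi_penalty_fit(X, y, lam1, lam2, sigma)
        f = K @ alpha                       # fitted values f(x_i)
        fidelity = np.mean((f - y) ** 2)    # (1/m)*sum_i (f(x_i) - y_i)^2
        pen1 = alpha @ (K @ alpha)          # ||f||_H^2
        pen2 = f @ (L @ f)                  # f' L f
        lam1 = fidelity / (gamma * pen1 + 1e-12)  # balance lam1*pen1
        lam2 = fidelity / (gamma * pen2 + 1e-12)  # balance lam2*pen2
    return lam1, lam2

if __name__ == "__main__":
    # Toy usage: noisy samples of a smooth function on [-1, 1]^2.
    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(50, 2))
    y = np.sin(np.pi * X[:, 0]) + 0.1 * rng.normal(size=50)
    print(pb_balance(X, y))
```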

Mathematical Preliminaries and Notations
Convergence Analysis
Parameter Choice Rules
Numerical Realization
Conclusion