Abstract

Many inverse problems in machine learning, system identification, and image processing involve nuisance parameters that nonetheless affect the recovery of the parameters of interest. Separable nonlinear optimization problems fall into this category, and their special separable structure has inspired several efficient optimization strategies. A well-known method is variable projection (VP), which projects out a subset of the estimated parameters, yielding a reduced problem with fewer unknowns. Expectation maximization (EM) is another separation-based method that provides a powerful framework for estimating nuisance parameters. Although both methods handle a subset of the parameters in a similar way, the relationship between EM and VP has been overlooked in previous studies. In this article, we explore the internal relationships and differences between VP and EM. Unlike algorithms that separate the parameters directly, the hierarchical identification algorithm decomposes a complex model into several linked submodels and identifies the parameters of each. This article therefore also studies the differences and connections between the hierarchical algorithm and parameter-separation algorithms such as VP and EM. In the numerical simulations, Monte Carlo experiments compare the performance of the three algorithms. The results show that the VP algorithm usually converges faster than the other two and is more robust to the initial values of the parameters.
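To make the variable projection idea described above concrete, the following is a minimal sketch of VP applied to a separable nonlinear least-squares model y ≈ Phi(alpha) c, in which the linear coefficients c are projected out by an inner linear least-squares solve, leaving a reduced problem in the nonlinear parameters alpha alone. The two-exponential model, the parameter values, and all names below are illustrative assumptions, not the article's experimental setup.

```python
# Sketch of variable projection (VP) for a separable model
# y ~ Phi(alpha) @ c: the linear coefficients c are eliminated
# inside the reduced objective, so the outer optimizer sees a
# problem in the nonlinear parameters alpha only.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
t = np.linspace(0.0, 4.0, 100)

def basis(alpha):
    # Columns of Phi depend only on the nonlinear parameters alpha
    # (assumed model: a sum of two decaying exponentials).
    return np.column_stack([np.exp(-alpha[0] * t), np.exp(-alpha[1] * t)])

# Synthetic data from assumed true parameters alpha = [1, 3], c = [2, -1].
y = basis(np.array([1.0, 3.0])) @ np.array([2.0, -1.0])
y += 0.01 * rng.standard_normal(t.size)

def reduced_residual(alpha):
    # Project out c: for fixed alpha, the optimal c solves a linear
    # least-squares problem, so the residual depends on alpha alone.
    Phi = basis(alpha)
    c, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return Phi @ c - y

# Optimize the reduced problem over the nonlinear parameters,
# then recover the linear coefficients at the solution.
sol = least_squares(reduced_residual, x0=np.array([0.5, 5.0]))
Phi = basis(sol.x)
c_hat, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print("alpha:", sol.x, "c:", c_hat)
```

The design point this sketch illustrates is the one the abstract makes: the outer search runs over fewer unknowns (alpha only), which is the usual source of VP's faster convergence and reduced sensitivity to initialization.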
