Abstract
Sparse Blind Source Separation (sparse BSS) is a key method for analyzing multichannel data in fields ranging from medical imaging to astrophysics. However, since it seeks the solution of a non-convex penalized matrix-factorization problem, its performance largely depends on the optimization strategy. In this context, Proximal Alternating Linearized Minimization (PALM) has become a standard algorithm which, despite its theoretical grounding, generally yields poor practical separation results. In this work, we first investigate the origins of these limitations, which are shown to lie in the sensitivity to both the initialization and the choice of the regularization parameters. As an alternative, we propose a novel strategy that combines a heuristic approach with PALM, and we show its relevance on realistic astrophysical data.
Highlights
R_S controls the trade-off between the data-fidelity and sparsity terms. It can be decomposed as R_S = Λ_S W, where Λ_S is a diagonal matrix of the regularization parameters λ1, λ2, ..., λn and W is a matrix of individual penalization coefficients used in the context of reweighted ℓ1 [8].
Any practitioner can draw the same conclusion: the solutions of sparse BSS methods are highly sensitive to the initial point and to the values of the regularization parameters, which are generally tricky to tune without a first guess of the solution.
We show that the ability of recent, theoretically grounded optimization strategies such as Proximal Alternating Linearized Minimization (PALM) to solve sparse BSS problems is highly sensitive to both the initialization and the values of the regularization parameters.
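The reweighted-ℓ1 penalty built from R_S = Λ_S W acts, in proximal algorithms, as an entrywise soft-thresholding of the source matrix, with threshold λ_i W[i, j] for entry (i, j). A minimal NumPy sketch, assuming the standard ℓ1 proximal operator (the function name and signature are illustrative, not from the paper):

```python
import numpy as np

def prox_weighted_l1(S, lambdas, W, step=1.0):
    """Proximal operator of the weighted l1 penalty ||R_S * S||_1,
    with R_S = Lambda_S @ W (Lambda_S diagonal with entries `lambdas`).

    Each entry S[i, j] is soft-thresholded by step * lambdas[i] * W[i, j].
    """
    T = step * lambdas[:, None] * W          # per-entry thresholds
    return np.sign(S) * np.maximum(np.abs(S) - T, 0.0)
```

When no reweighting is used (W equal to the all-ones/identity pattern), this reduces to ordinary soft-thresholding with one threshold λ_i per source.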
Summary
In the BSS [1] framework, the data consist of m observations, each with t samples. In matrix form, the goal is to find two matrices S (of size n × t) and A (of size m × n), called respectively the source and mixing matrices, such that X = AS + N, where X (of size m × t) is the observation matrix, corrupted by some unknown noise N. R_S controls the trade-off between the data-fidelity and sparsity terms. It can be decomposed as R_S = Λ_S W, where Λ_S is a diagonal matrix of the regularization parameters λ1, λ2, ..., λn and W is a matrix of individual penalization coefficients used in the context of reweighted ℓ1 [8] (when no reweighting is used, W is equal to the identity matrix).
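To make the model X = AS + N and the PALM scheme concrete, the sketch below implements one PALM iteration for a standard sparse BSS objective, 0.5·||X − AS||²_F + λ||S||_1, with a unit-norm constraint on the columns of A. This is a generic textbook formulation, assumed for illustration; the step sizes, constraint, and scalar λ are simplifications of the paper's R_S setup:

```python
import numpy as np

def soft_threshold(Z, T):
    # entrywise soft-thresholding: prox of the l1 norm
    return np.sign(Z) * np.maximum(np.abs(Z) - T, 0.0)

def palm_step(X, A, S, lam):
    """One PALM iteration for min 0.5*||X - A S||_F^2 + lam*||S||_1,
    with unit-norm columns of A (a common, assumed constraint)."""
    # proximal gradient step on S (step = 1/Lipschitz constant of the gradient)
    Ls = np.linalg.norm(A.T @ A, 2) + 1e-12
    S = soft_threshold(S - (A.T @ (A @ S - X)) / Ls, lam / Ls)
    # projected gradient step on A
    La = np.linalg.norm(S @ S.T, 2) + 1e-12
    A = A - ((A @ S - X) @ S.T) / La
    A /= np.maximum(np.linalg.norm(A, axis=0, keepdims=True), 1e-12)
    return A, S
```

Iterating `palm_step` from a given initial (A, S) is exactly where the sensitivity discussed above appears: different initial points and different values of λ can lead to very different stationary points of this non-convex problem.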