Abstract

This paper presents a stable and fast algorithm for independent component analysis with reference (ICA-R). ICA-R is a technique that incorporates available reference signals into the ICA contrast function, forming an augmented Lagrangian function under the framework of constrained ICA (cICA). The previous ICA-R algorithm solved this optimization problem via a Newton-like learning rule; unfortunately, its slow convergence and potential misconvergence limit the capability of ICA-R. This paper first analyzes the flaws of the previous algorithm and then introduces a new, stable algorithm with a faster convergence speed. There are two other highlights in this paper: first, new approaches, including a reference deflation technique and a direct way of obtaining references, are introduced to facilitate the application of ICA-R; second, a method is proposed for using the new ICA-R to recover the complete set of underlying sources, with advantages over other classical ICA methods. Finally, experiments on both synthetic and real-world data demonstrate the superior performance of the new algorithm over both the previous ICA-R and other well-known methods.

Highlights

  • Independent component analysis (ICA) is a data analysis technique for uncovering independent components (ICs) which underlie the observational data [1][2][43]

  • Testing with self-supplied references: four synthetic sources, which can be considered mutually independent, are depicted in Figure 2(a). Sources 1 and 3 have sub-Gaussian distributions, and the rest have super-Gaussian distributions

  • If j was set to 20.5, the desired IC could be extracted by independent component analysis with reference (ICA-R) using the reference x_i (i = 1, 2, 3, 4)
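For intuition, a four-source setup like the one described above can be sketched in NumPy. The exact waveforms of the paper's Figure 2(a) are not reproduced in this summary, so the specific signals below (square wave, Laplace noise, sine wave, heavy-tailed noise) are illustrative assumptions chosen only to match the stated pattern: sources 1 and 3 sub-Gaussian, sources 2 and 4 super-Gaussian.

```python
import numpy as np

def make_synthetic_sources(T=4000, seed=0):
    """Four mutually independent synthetic sources (illustrative stand-ins
    for the paper's Figure 2(a)): rows 0 and 2 are sub-Gaussian,
    rows 1 and 3 are super-Gaussian. Returns shape (4, T),
    each row centered and scaled to unit variance."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0, 40, T)
    s1 = np.sign(np.sin(t))                                   # square wave: sub-Gaussian (excess kurtosis -2)
    s2 = rng.laplace(size=T)                                  # Laplace noise: super-Gaussian (+3)
    s3 = np.sin(t)                                            # sine wave: sub-Gaussian (-1.5)
    s4 = np.sign(rng.standard_normal(T)) * rng.standard_normal(T) ** 2  # heavy-tailed noise: super-Gaussian
    S = np.vstack([s1, s2, s3, s4])
    S = S - S.mean(axis=1, keepdims=True)
    S = S / S.std(axis=1, keepdims=True)
    return S
```

A quick sanity check is the excess kurtosis `(S ** 4).mean(axis=1) - 3`, which is negative for the sub-Gaussian rows and positive for the super-Gaussian ones.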

Introduction

Independent component analysis (ICA) is a data analysis technique for uncovering independent components (ICs) which underlie the observational data [1][2][43]. This technique finds a mutually independent representation of the original data by seeking a linear transformation. There are generally two versions of the function f(y) chosen as the negentropy approximation function. The corresponding augmented Lagrangian function is

L(W, μ, λ) = J(y) + G(y : W, μ) + H(y : W, λ),    (S1-4)

where μ and λ are the Lagrangian multipliers for the inequality and equality constraints, respectively.
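As a concrete illustration of this augmented-Lagrangian formulation, here is a minimal gradient-based sketch of a one-unit ICA-with-reference update in NumPy. It is a simplified stand-in, not the paper's Newton-like algorithm, and it makes several assumptions: G(y) = log cosh(y) as the contrast nonlinearity (ascending E[G(y)] directly, which suits a sub-Gaussian target, rather than maximizing the full squared negentropy approximation), a mean-squared closeness measure ε(y, r) = E[(y − r)²] with threshold ξ for the inequality constraint, a simple ascent update for the multiplier μ, and renormalization of w on whitened data in place of the equality (unit-variance) constraint handled by λ.

```python
import numpy as np

def ica_r_unit(Xw, r, xi=0.5, gamma=0.1, lr=0.1, n_iter=500, seed=0):
    """One-unit ICA with reference (gradient sketch, not the paper's
    Newton-like update). Xw: whitened observations, shape (n, T).
    r: reference signal, shape (T,). Ascends a log-cosh contrast
    subject to the closeness constraint E[(y - r)^2] - xi <= 0."""
    rng = np.random.default_rng(seed)
    n, T = Xw.shape
    w = rng.standard_normal(n)
    w /= np.linalg.norm(w)
    mu = 0.0                                     # multiplier for the inequality constraint
    for _ in range(n_iter):
        y = w @ Xw
        grad_J = (Xw @ np.tanh(y)) / T           # gradient of E[log cosh(y)] w.r.t. w
        eps = np.mean((y - r) ** 2)              # closeness of y to the reference
        grad_eps = 2.0 * (Xw @ (y - r)) / T
        mu = max(0.0, mu + gamma * (eps - xi))   # ascent on the multiplier
        w += lr * (grad_J - mu * grad_eps)       # ascent on the Lagrangian in w
        w /= np.linalg.norm(w)                   # unit norm stands in for the equality constraint
    return w
```

On a two-source whitened mixture, a rough reference (e.g. a slightly phase-shifted square wave) is enough to steer the extraction toward the desired source and fix its sign, which is the essential behavior ICA-R adds over plain one-unit ICA.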
