Abstract

Several Independent Component Analysis (ICA) algorithms, based on different approaches, can estimate independent components, up to some precision, from a linear mixture of those components, provided the components do not follow a Gaussian distribution. Some approaches, however, work better than others when the data distribution and characteristics follow a certain pattern. From a mixture comprising two or more independent components it is quite hard, if not impossible, to determine the distribution of the independent components accurately, and therefore to characterize one or more ICA approaches as better than others for a certain type of data. This paper describes a framework for ICA algorithms as proposed by Ejaz [1]. In this study we characterize four ICA algorithms, each based on a different approach and run with pre-selected fixed parameters, on a number of datasets formed by linearly mixing two to five independent components drawn from a variety of distributions. The algorithms used are FastICA, Extended Infomax, JADE, and Kernel ICA based on canonical correlation analysis. Each algorithm is discussed briefly yet with most of its important aspects covered, so that this paper also serves as a tutorial on these ICA algorithms for novice readers. We performed an extensive statistical analysis over more than 300 datasets to characterize each dataset for one or more of the four algorithms, according to which algorithm estimates the independent components closest to the original ones.
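The setting described above can be illustrated with a minimal sketch: two non-Gaussian sources are mixed by a linear mixing matrix, and FastICA (one of the four algorithms studied) recovers them up to scale, sign, and permutation. The specific sources, mixing matrix, and use of scikit-learn's `FastICA` are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np
from sklearn.decomposition import FastICA  # one of the four algorithms studied

# Two non-Gaussian sources: a sinusoid and a sawtooth (both sub-Gaussian).
n = 2000
t = np.linspace(0, 8, n)
s1 = np.sin(2 * t)
s2 = (t % 1.0) - 0.5  # sawtooth, roughly uniform-distributed
S = np.c_[s1, s2]     # true sources, shape (n, 2)

# Linear mixture X = S A^T with an illustrative mixing matrix A.
A = np.array([[1.0, 0.5],
              [0.4, 1.0]])
X = S @ A.T           # observed mixtures, shape (n, 2)

# Estimate the independent components from the mixtures alone.
ica = FastICA(n_components=2, whiten="unit-variance", random_state=0)
S_hat = ica.fit_transform(X)  # recovered up to scale/sign/permutation

# Each estimated component should correlate strongly with exactly one
# true source; this is the "closeness to the original components" that
# the paper's statistical analysis measures across algorithms.
C = np.abs(np.corrcoef(S.T, S_hat.T)[:2, 2:])
print(C.max(axis=1))  # near 1.0 for both sources
```

Because ICA cannot fix the ordering, sign, or scale of the components, comparisons against the original sources (as done in the paper's analysis) must be made up to these indeterminacies, e.g. via absolute correlations as above.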
