Abstract

Factor analysis is used in many applications. One example is image recognition, where it is often necessary to learn representations of the underlying components of images, such as objects, object parts, or features. Another is data compression, where the original data are transformed into a space of lower dimension. The goal of factor analysis is to find the underlying factors (factor loadings) and the contributions of these factors to the original observations (factor scores). Recently, we proposed a method of Boolean factor analysis based on the ability of a Hopfield-like network to create attractors for factors [19]. That work showed that an obstacle to using this network for Boolean factor analysis is the appearance of two global spurious attractors that bear no relation to the internal structure of the analyzed signals. To eliminate these attractors, we had to modify the common Hopfield network architecture by adding a special inhibitory neuron. The existence of the two global attractors and their elimination by the special inhibitory neuron were demonstrated by Frolov et al. [19] only through computer simulations. Since the appearance of these attractors is a novel and important phenomenon, in this paper we investigate it both analytically and with additional computer simulations, to confirm its validity and explain its origin.
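The abstract does not reproduce the network equations, so the following is an illustrative sketch only, not the authors' implementation. It shows Hopfield-like attractor recall on sparse binary patterns, where a simple k-winners-take-all cap on total activity stands in for the global inhibitory control that the special inhibitory neuron provides in [19]. All names, sizes, and the sparsity value are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, p = 200, 5, 0.1  # neurons, number of factors, factor sparsity (assumed values)

# Sparse binary "factor loadings": each factor activates roughly p*N neurons.
factors = (rng.random((L, N)) < p).astype(float)

# Hebbian weight matrix summed over factors, with self-connections removed.
W = factors.T @ factors
np.fill_diagonal(W, 0)

def recall(x, k, steps=20):
    """Synchronous Hopfield-like updates. At each step only the k most
    excited neurons stay active (k-winners-take-all), a crude stand-in
    for the uniform inhibitory feedback used in the modified network."""
    for _ in range(steps):
        h = W @ x                       # recurrent input to every neuron
        idx = np.argpartition(h, -k)[-k:]
        x = np.zeros(N)
        x[idx] = 1.0                    # keep exactly the k winners active
    return x

# Cue the network with a corrupted copy of factor 0 (a few bits flipped).
cue = factors[0].copy()
flip = rng.choice(N, size=5, replace=False)
cue[flip] = 1 - cue[flip]

k = int(factors[0].sum())               # hold activity at the factor's size
out = recall(cue, k)
overlap = out @ factors[0] / factors[0].sum()
```

With the activity level pinned near a factor's own size, the corrupted cue relaxes onto the stored factor; without such a constraint on total activity, the dynamics can instead drift toward trivial all-active or near-empty states, which is the kind of global spurious attractor the abstract describes.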
