Abstract

Dimensionality reduction techniques such as principal component analysis and factor analysis are used to discover a linear mapping between high-dimensional data samples and points in a lower-dimensional subspace. Previously, Frey and Jojic introduced transformation-invariant component analysis (TCA) to learn a linear mapping that is invariant to a set of global transformations of known form. However, parameter estimation in that model using the previously proposed expectation maximization (EM) algorithm required scalar operations on the order of N², where N is the dimensionality of each training example. This is prohibitive for many applications of interest, such as modeling mid- to large-size images, where, for instance, N may be as high as 786432 (a 512×512 RGB image). In this paper, we present an efficient algorithm that reduces the computational requirements to the order of N log N. With this speedup, we show the effectiveness of transformation-invariant component analysis in various applications including tracking, learning video textures, clustering, object recognition and object detection in images. Software for TCA can be downloaded from http://www.psi.toronto.edu/fastTCA.htm
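
The abstract does not spell out where the N log N comes from, but the standard mechanism behind such speedups is FFT-based inference over the transformation set. Below is a minimal illustrative sketch, assuming the transformations are cyclic shifts of a 1-D signal with isotropic Gaussian pixel noise; the function name and setup are hypothetical and are not taken from the paper or its released fastTCA software:

```python
import numpy as np

def shift_log_likelihoods(x, z, sigma2):
    """Log-likelihood of observation x under every cyclic shift of the
    latent signal z, assuming i.i.d. Gaussian noise with variance sigma2.
    Evaluating all N shifts naively costs O(N^2); the identity
        ||shift(z, t) - x||^2 = ||z||^2 + ||x||^2 - 2 * corr(x, z)[t]
    lets the FFT compute every shift at once in O(N log N).
    """
    N = x.size
    # Circular cross-correlation of x with z at all N shifts, via FFT:
    # corr[t] = sum_n x[n] * z[(n - t) mod N]
    corr = np.fft.ifft(np.fft.fft(x) * np.conj(np.fft.fft(z))).real
    sq_err = np.dot(z, z) + np.dot(x, x) - 2.0 * corr
    return -0.5 * sq_err / sigma2 - 0.5 * N * np.log(2 * np.pi * sigma2)

# Toy E-step: posterior over shifts under a uniform prior.
rng = np.random.default_rng(0)
z = rng.standard_normal(256)
x = np.roll(z, 37) + 0.1 * rng.standard_normal(256)  # z shifted by 37
ll = shift_log_likelihoods(x, z, sigma2=0.01)
post = np.exp(ll - ll.max())
post /= post.sum()
print(post.argmax())  # recovers the shift: 37
```

For 2-D images the same trick applies with 2-D FFTs over both translation axes; this is the kind of step that replaces the O(N²) per-example cost of enumerating transformations explicitly in EM.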

