Correspondence Priors for Binocular Image Data via Canonical Correlation Analysis

Christian Conrad1* and Rudolf Mester1

1 J.W.G University Frankfurt, Visual Sensorics and Information Processing Group, Germany

In this work, we study unsupervised learning of correspondence relations in binocular video streams. This is useful for low-level vision tasks such as stereo vision or motion estimation, but also for the analysis of fMRI data. It is still an open question how correspondence relations evolve and are represented in the human brain. Correspondence estimation is often based on the principle of identifying corresponding pixels or patches and appears in many forms: in stereo vision, pixel correspondences among a pair of images taken at the same point in time serve to determine a depth map [LucasKanade1981]. In motion estimation, pixel correspondences among consecutive images are sought [HornSchunck1981]. Most spatial feature approaches to correspondence estimation are based on the prototypical detection-and-matching framework. Here, correspondences are typically not determined from the raw pixel representation but rather from rich feature descriptors. In the past, feature detectors have often been designed based on statistical (and biological) principles [Schmid2000, Mikolajczyk2004]. Today, there is increased interest in unsupervised learning of such features directly from data, based on energy models and probabilistic generative models [OlshausenField1997, Hyvaerinen2009]. Furthermore, unsupervised feature learning is not restricted to modeling the actual image content but can also be used to learn the relationship between pairs of images, where images may be related via a spatial transformation, a depth map, or the optical flow [Susskind2011, Memisevic2012].
In contrast to probabilistic methods for unsupervised feature learning, which often involve rather sophisticated machinery and optimization schemes, we present a sampling-free algorithm based on Canonical Correlation Analysis (CCA, [Hotelling1936]) and show how 'correspondence priors' can be determined in closed form. Specifically, given video streams of two views, our algorithm first determines pixel correspondences on a coarse scale by learning the inter-image transformation via CCA. Subsequently, it projects those correspondences to the original resolution. After learning, for each point in video channel A, regions of high probability containing the true correspondence are determined, thus forming correspondence priors. While CCA only allows us to learn the inter-image transformation implicitly, we show how to apply a learnt transformation to previously unseen data in a principled way. Correspondence priors are efficiently encoded using second-order statistics and may then be plugged into probabilistic and energy-based formulations of specific vision applications. In contrast to popular probabilistic models, our algorithm can be applied in closed form, involving only a QR decomposition or the SVD, which makes it especially suitable for real-world applications. We experimentally verify the applicability of the approach in several real-world scenarios where the binocular views may be subject to substantial spatial transformations.

Acknowledgements: We gratefully acknowledge funding by the German Federal Ministry of Education and Research (BMBF) in the project Bernstein Fokus Neurotechnologie -- Frankfurt Vision Initiative 01GQ0841.

Keywords: CCA, Correspondence Estimation, Unsupervised Feature Learning

Conference: Bernstein Conference 2012, Munich, Germany, 12 Sep - 14 Sep, 2012.

Presentation Type: Poster

Topic: Data analysis, machine learning, neuroinformatics

Citation: Conrad C and Mester R (2012).
Correspondence Priors for Binocular Image Data via Canonical Correlation Analysis. Front. Comput. Neurosci. Conference Abstract: Bernstein Conference 2012. doi: 10.3389/conf.fncom.2012.55.00124

Copyright: The abstracts in this collection have not been subject to any Frontiers peer review or checks, and are not endorsed by Frontiers. They are made available through the Frontiers publishing platform as a service to conference organizers and presenters. The copyright in the individual abstracts is owned by the author of each abstract or his/her employer unless otherwise stated. Each abstract, as well as the collection of abstracts, is published under a Creative Commons CC-BY 4.0 (attribution) licence (https://creativecommons.org/licenses/by/4.0/) and may thus be reproduced, translated, adapted and be the subject of derivative works provided the authors and Frontiers are attributed. For Frontiers' terms and conditions please see https://www.frontiersin.org/legal/terms-and-conditions.

Received: 11 May 2012; Published Online: 12 Sep 2012.

* Correspondence: Mr. Christian Conrad, J.W.G University Frankfurt, Visual Sensorics and Information Processing Group, Frankfurt, Germany, christian.conrad@vsi.cs.uni-frankfurt.de
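The closed-form CCA step the abstract refers to can be sketched as follows. This is a minimal NumPy illustration of CCA solved via the SVD of the whitened cross-covariance; the function name, the ridge term, and the patch setup are our own assumptions for exposition, not the authors' implementation.

```python
import numpy as np

def cca_closed_form(X, Y, reg=1e-8):
    """Closed-form CCA via SVD of the whitened cross-covariance.

    X : (n, p) observations from view A (e.g. vectorized coarse-scale patches)
    Y : (n, q) observations from view B
    Returns projection matrices Wx, Wy and the canonical correlations s.
    """
    # Center both views.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]

    # Sample (cross-)covariances, with a small ridge for numerical stability.
    Cxx = X.T @ X / (n - 1) + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / (n - 1) + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / (n - 1)

    # Inverse matrix square roots via eigendecomposition (Cxx, Cyy are SPD).
    def inv_sqrt(C):
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    Kx, Ky = inv_sqrt(Cxx), inv_sqrt(Cyy)

    # The SVD of the whitened cross-covariance yields the canonical directions.
    U, s, Vt = np.linalg.svd(Kx @ Cxy @ Ky)
    Wx = Kx @ U      # maps view-A data into the shared canonical space
    Wy = Ky @ Vt.T   # maps view-B data into the shared canonical space
    return Wx, Wy, s  # s: canonical correlations, sorted in descending order
```

On previously unseen data, a point x from view A is mapped into the shared canonical space as x @ Wx; candidate correspondences in view B are those points whose projections y @ Wy lie nearby. This is one way such a learnt implicit transformation could be applied to form the 'correspondence priors' described above.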