Event Abstract

Tracking Objects in Depth using Size Change

Chen Zhang1* and Julian Eggert2

1 Darmstadt University of Technology, Control Theory and Robotics Lab, Germany
2 Honda Research Institute Europe GmbH, Germany

Tracking an object in depth is an important task, since the distance to an object often correlates with imminent danger, e.g. in the case of an approaching vehicle. A common way to estimate the depth of a tracked object is to use binocular methods such as stereo disparity. In practice, however, depth measurement with binocular methods is technically expensive because it requires camera calibration and rectification. In addition, larger depths are difficult to estimate because of the inverse relationship between disparity and depth.

Here, we introduce an alternative approach to depth estimation, Depth-from-Size. This is a human-inspired monocular method in which depth is obtained by exploiting the fact that object depth is proportional to the ratio of the object's physical size to its retinal size. Since both the physical size and the retinal size are unknown, they have to be measured and estimated together with the depth in a mutually interdependent manner. For each of the three terms, specific measurement and estimation methods are probabilistically combined. This results in probability density functions (pdfs) at the outputs of three components that measure and estimate these three terms, respectively.

In every processing step, we first use a 2D tracking system to obtain the object's 2D position in the current monocular image. At the position where the target object is found, the scaling factor of the object's retinal size is measured by a pyramidal Lucas-Kanade algorithm.
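The interplay of the three coupled terms can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: the per-frame scale factors would in practice come from the pyramidal Lucas-Kanade estimator, and all numeric values (focal length, sizes, depths) are hypothetical.

```python
def update_retinal_size(s_prev, scale_factor):
    # The retinal size is propagated multiplicatively by the per-frame
    # scale factor (measured, e.g., by pyramidal Lucas-Kanade; here
    # simply a given number).
    return s_prev * scale_factor

def depth_from_size(focal_length, physical_size, retinal_size):
    # Constraint coupling the three terms:
    #   depth / focal_length = physical_size / retinal_size
    return focal_length * physical_size / retinal_size

def physical_size_from_depth(focal_length, depth, retinal_size):
    # Resolves the scale ambiguity once, at initialization,
    # e.g. from a stereo-disparity depth measurement.
    return depth * retinal_size / focal_length

f = 800.0  # focal length in pixels (hypothetical)

# Initialization: depth known once (e.g. from stereo), physical size inferred.
S = physical_size_from_depth(f, depth=20.0, retinal_size=40.0)

# Later frames: only the change of the retinal size is measured.
s = 40.0
for k in (0.8, 1.25):  # per-frame scale factors (hypothetical)
    s = update_retinal_size(s, k)
    print(depth_from_size(f, S, s))  # prints 25.0, then 20.0
```

A shrinking retinal size (scale factor below 1) thus directly translates into a larger estimated depth, which is the core of the Depth-from-Size idea; the probabilistic machinery of the abstract replaces these point values with pdfs.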
In our setting, the object's retinal size is the only observable subject to frequent measurements, whereas the physical size and the depth are internal states that have to be inferred by the system according to the constraint - depth / focal length = physical size / retinal size - that couples the three terms. Bayesian estimators are used to estimate the pdfs of the retinal size and the depth, whereas the physical size is obtained by a mean estimator, since it is assumed to remain constant over time. Additional measurement inputs for the physical size and the depth are optional, acting as correcting evidence for both of these terms.

Measuring only the retinal size leaves an inherent ambiguity in the system, so either the physical size or the depth must become available once, at initialization. In our system, we used a known object size or depth information gained from other depth cues, such as stereo disparity, for this purpose.

The performance of the proposed approach was evaluated in two scenarios: an artificial scenario with ground truth and a real-world scenario. In the latter, the depth estimation performance of the system is compared with that of directly measured stereo disparity. The evaluation results show that this approach is a reliable alternative to the standard stereo disparity approach for depth estimation, with several advantages: 1) simultaneous estimation of depth, physical size and retinal size; 2) no stereo camera calibration or rectification; 3) good depth estimation at larger depth ranges for large objects.

Conference: Bernstein Conference on Computational Neuroscience, Frankfurt am Main, Germany, 30 Sep - 2 Oct, 2009.

Presentation Type: Poster Presentation

Topic: Abstracts

Citation: Zhang C and Eggert J (2009). Tracking Objects in Depth using Size Change. Front. Comput. Neurosci. Conference Abstract: Bernstein Conference on Computational Neuroscience.
doi: 10.3389/conf.neuro.10.2009.14.026

Copyright: The abstracts in this collection have not been subject to any Frontiers peer review or checks, and are not endorsed by Frontiers. They are made available through the Frontiers publishing platform as a service to conference organizers and presenters. The copyright in the individual abstracts is owned by the author of each abstract or his/her employer unless otherwise stated. Each abstract, as well as the collection of abstracts, is published under a Creative Commons CC-BY 4.0 (attribution) licence (https://creativecommons.org/licenses/by/4.0/) and may thus be reproduced, translated, adapted and be the subject of derivative works provided the authors and Frontiers are attributed. For Frontiers' terms and conditions please see https://www.frontiersin.org/legal/terms-and-conditions.

Received: 25 Aug 2009; Published Online: 25 Aug 2009.

* Correspondence: Chen Zhang, Darmstadt University of Technology, Control Theory and Robotics Lab, Darmstadt, Germany, czhang@rtr.tu-darmstadt.de