Color edge detection in natural scenes

Thorsten Hansen¹* and Karl R. Gegenfurtner¹

¹ Justus Liebig University, Department of General Psychology, Germany

In a statistical analysis of over 700 natural scenes from the McGill calibrated color image database (Olmos and Kingdom, 2004, http://tabby.vision.mcgill.ca), we found that luminance and chromatic edges are statistically independent. This result shows that chromatic edge contrast is an independent source of information that natural or artificial vision systems can linearly combine with other cues for the proper segmentation of objects (Hansen and Gegenfurtner, 2009, Visual Neuroscience).

Here we investigate the contribution of color and luminance information to the prediction of human-labeled edges. Edges were detected in the three planes of the DKL color space (Lum, L-M, S-(L+M)) and compared to human-labeled edges from the Berkeley segmentation data set. We used an ROC framework for a threshold-independent comparison of edge detector responses (provided by the Sobel operator) to ground truth (given by the human-marked edges). The average improvement, quantified as the difference between the areas under the ROC curves for pure luminance versus combined luminance/chromatic edges, was small: 2.7% when both L-M and S-(L+M) edges were used in addition to the luminance edges, 2.1% for simulated dichromats lacking an L-M channel, and 2.2% for simulated dichromats lacking an S-(L+M) channel. Interestingly, a comparable improvement for chromatic information (2.5%) occurred when the ROC analysis was based on human-labeled edges in gray-scale images; observers probably use high-level knowledge to correctly mark edges even in the absence of a luminance contrast. While the average advantage of the additional chromatic channels was small, some images showed a considerably higher improvement of up to 11%, and for a few images performance decreased. Overall, color was advantageous in 74% of the 100 images we evaluated. We interpret these results as showing that color information is on average beneficial for the detection of edges, and that it can be highly useful, even crucial, in particular scenes.
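As a rough illustration of the analysis pipeline described above, the sketch below computes Sobel edge responses in each DKL plane, combines a chosen subset of planes, and scores the combined response against human-marked edges with a threshold-free ROC/AUC comparison. This is a minimal sketch under stated assumptions, not the authors' code: the input is assumed to be an image already converted to DKL coordinates (the study used the calibrated McGill images and a cone-based transform not reproduced here), equal weights are assumed for combining the planes, and the function names are hypothetical.

```python
# Minimal sketch (assumptions noted above): Sobel responses per DKL
# plane, linear combination with equal weights, ROC/AUC scoring.
import numpy as np
from scipy.ndimage import sobel
from sklearn.metrics import roc_auc_score


def sobel_magnitude(plane):
    """Gradient magnitude of a single color plane via the Sobel operator."""
    gx = sobel(plane.astype(float), axis=1)  # horizontal gradient
    gy = sobel(plane.astype(float), axis=0)  # vertical gradient
    return np.hypot(gx, gy)


def edge_auc(image_dkl, edge_labels, planes=(0, 1, 2)):
    """
    image_dkl:   H x W x 3 array with planes (Lum, L-M, S-(L+M)).
    edge_labels: H x W boolean array of human-marked edge pixels.
    planes:      which DKL planes to combine (equal weights assumed;
                 the abstract reports a linear combination without
                 specifying the weights).
    Returns the area under the ROC curve, treating the combined edge
    response at every pixel as a score for "edge" vs. "non-edge".
    """
    response = sum(sobel_magnitude(image_dkl[..., p]) for p in planes)
    return roc_auc_score(edge_labels.ravel().astype(int), response.ravel())
```

Under these assumptions, the per-image color advantage reported in the abstract corresponds to the difference `edge_auc(img, labels, (0, 1, 2)) - edge_auc(img, labels, (0,))`, and the dichromat simulations correspond to dropping the L-M or S-(L+M) plane from the combination.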
Conference: Bernstein Conference on Computational Neuroscience, Frankfurt am Main, Germany, 30 Sep - 2 Oct, 2009.

Presentation Type: Oral Presentation

Topic: Sensory processing

Citation: Hansen T and Gegenfurtner KR (2009). Color edge detection in natural scenes. Front. Comput. Neurosci. Conference Abstract: Bernstein Conference on Computational Neuroscience. doi: 10.3389/conf.neuro.10.2009.14.141

Received: 28 Aug 2009; Published Online: 28 Aug 2009.

* Correspondence: Thorsten Hansen, Justus Liebig University, Department of General Psychology, Giessen, Germany, thorsten.hansen@psychol.uni-giessen.de