This paper presents a novel approach to depth estimation using a multiple color-filter aperture (MCA) camera and its application to multifocusing. An image acquired by the MCA camera contains spatially varying misalignment among the RGB color channels, where the direction and length of the misalignment are a function of the distance of an object from the plane of focus. Therefore, if the misalignment is estimated from the MCA output image, multifocusing and depth estimation become possible using a set of image processing algorithms. We first segment the image into multiple clusters with approximately uniform misalignment using a color-based region classification method, and then find a rectangular region that encloses each cluster. For each of the rectangular regions in the RGB color channels, color shifting vectors are estimated using a phase correlation method. After the three color-channel clusters are aligned by shifting them in the direction opposite to the estimated color shifting vectors, the aligned clusters are fused to produce an approximately in-focus image. Because of the finite size of the color-filter apertures, the fused image still contains a certain amount of spatially varying out-of-focus blur, which is removed by a truncated constrained least-squares filter followed by a spatially adaptive artifact-removal filter. Experimental results show that the MCA-based multifocusing method significantly enhances the visual quality of an image containing multiple objects at different distances, and can be fully or partially incorporated into multifocusing or extended depth-of-field systems. The MCA camera also enables single-camera depth estimation, where the displacement between the apertures plays the role of the baseline in a stereo vision system. Experimental results show that the estimated depth is accurate enough to support a variety of vision-based tasks, such as image understanding, description, and robot vision.
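
The core alignment step described above relies on standard phase correlation between color channels of a cropped region. The following is a minimal illustrative sketch of that idea, not the authors' implementation; the function names, the choice of the green channel as reference, and the integer-only shift are assumptions made for clarity.

```python
# Hypothetical sketch: estimating the color shifting vector between two channels
# of an MCA region via phase correlation, then aligning the shifted channel.
import numpy as np

def phase_correlation_shift(ref, target):
    """Estimate the integer (dy, dx) translation of `target` relative to `ref`
    from the peak of the normalized cross-power spectrum (phase correlation)."""
    F_ref = np.fft.fft2(ref)
    F_tgt = np.fft.fft2(target)
    cross_power = F_tgt * np.conj(F_ref)
    cross_power /= np.abs(cross_power) + 1e-12        # keep only the phase term
    correlation = np.abs(np.fft.ifft2(cross_power))   # sharp peak at the shift
    dy, dx = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Wrap shifts larger than half the region size to negative displacements.
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx

def align_channel(channel, dy, dx):
    """Shift a channel in the direction opposite to the estimated misalignment."""
    return np.roll(channel, shift=(-dy, -dx), axis=(0, 1))

# Illustrative usage on one rectangular region enclosing a cluster
# (region_rgb: an (H, W, 3) float array cropped from the MCA output image;
# the green channel is assumed to be the reference):
# dy_r, dx_r = phase_correlation_shift(region_rgb[..., 1], region_rgb[..., 0])
# aligned_r = align_channel(region_rgb[..., 0], dy_r, dx_r)
```

In this sketch the estimated per-region (dy, dx) vectors serve both purposes mentioned in the abstract: aligning and fusing the channels into an approximately in-focus region, and acting as the disparity from which depth can be inferred, with the inter-aperture displacement playing the role of the stereo baseline.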