Abstract

Imaging an object in a cluttered scene under severe occlusion is a highly challenging task for many computer vision applications. Although camera array synthetic aperture imaging has proven to be an effective way to image occluded objects, its quality is often significantly degraded by the shadows of the foreground occluder. To overcome this problem, recent work has attempted to label the foreground occluder via object segmentation or 3D reconstruction. However, these methods usually fail in the case of a complicated occluder or severe occlusion. In this paper, we present a novel optimal camera selection algorithm to handle this problem. Firstly, in contrast to traditional synthetic aperture photography methods, we formulate occluded object imaging as the problem of selecting visible light rays from the optimal camera views. To the best of our knowledge, this is the first work to “mosaic” a high quality image of an occluded object by selecting optimal visible light rays across the views of a camera array or a single moving camera. Secondly, a greedy optimization framework is presented to propagate visibility information among the depth focal planes. Thirdly, a multi-label energy minimization formulation is designed on each plane to select the optimal camera view. The energy is estimated in the 3D synthetic aperture image volume and integrates multi-view intensity consistency, the previously propagated visibility, and camera view smoothness; it is minimized via graph cuts. Finally, we compare this approach with traditional synthetic aperture imaging algorithms on the UCSD light field datasets and on our own datasets captured in indoor and outdoor environments, and extensive experimental results demonstrate the effectiveness and superiority of our approach.
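To make the per-plane selection step concrete, the sketch below assembles the kind of multi-label energy described above (multi-view intensity consistency, a visibility prior propagated from previously processed focal planes, and a Potts smoothness term over camera labels) for one focal plane. It is a minimal illustration, not the paper's implementation: the function and parameter names, the median-based consensus, and the iterated-conditional-modes sweep used here in place of graph-cut alpha-expansion are all assumptions.

```python
import numpy as np

def select_camera_labels(warped, vis_prior, lam=0.5, beta=2.0, n_sweeps=5):
    """Pick one camera per pixel on a single focal plane (illustrative sketch).

    warped    : (C, H, W) array; view c warped onto the current focal plane,
                so a scene point lying on that plane aligns across all C views.
    vis_prior : (C, H, W) array in [0, 1]; closer to 1 means camera c was judged
                visible at this pixel on previously processed focal planes.
    Returns an (H, W) integer label map (index of the selected camera).
    """
    warped = warped.astype(np.float64)
    C = warped.shape[0]

    # Data term: a camera whose ray agrees with the cross-view consensus
    # (here, the per-pixel median) is more likely to see the focal plane
    # itself rather than the foreground occluder.
    consensus = np.median(warped, axis=0)              # (H, W)
    data = np.abs(warped - consensus[None])            # (C, H, W)

    # Visibility term: penalise cameras flagged as occluded on earlier planes.
    data += lam * (1.0 - vis_prior)

    # Initial labels from the unary terms alone.
    labels = np.argmin(data, axis=0)

    # Smoothness (Potts) term handled by a few synchronous ICM sweeps; this is
    # only a stand-in for the graph-cut minimiser the paper uses. Note that
    # np.roll wraps at the borders, which a real implementation would mask.
    cams = np.arange(C)[:, None, None]                 # (C, 1, 1)
    for _ in range(n_sweeps):
        smooth = np.zeros_like(data)
        for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)):
            neigh = np.roll(labels, shift=(dy, dx), axis=(0, 1))
            smooth += beta * (cams != neigh[None])     # disagreement cost
        labels = np.argmin(data + smooth, axis=0)
    return labels
```

In the greedy framework the same labeling would be run plane by plane, with the resulting per-camera visibility fed forward as `vis_prior` for the next focal plane; graph cuts would replace the ICM sweep to obtain a stronger minimum of the same energy.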

Highlights

  • Occluded object imaging is a challenging task in many computer vision application fields such as video surveillance and monitoring, hidden object detection and recognition, and tracking through occlusion.

  • Computational photography is changing the traditional way of imaging by capturing additional visual information with generalized optics.

  • We synthetically focus the camera array by choosing a plane of focus and adding up all the rays corresponding to each point on the chosen plane to obtain a pixel of a "synthetic aperture" image (a minimal shift-and-add sketch follows this list).
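The sketch below illustrates the shift-and-add operation described in the last bullet for a planar camera array, under the usual assumptions (rectified views, cameras sharing a common image plane). The function name, the `cam_xy` positions, and the `alpha` refocusing parameter are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.ndimage import shift as translate

def synthetic_aperture_image(views, cam_xy, alpha):
    """Shift-and-add refocusing for a planar camera array (minimal sketch).

    views  : list of (H, W) grayscale images, one per camera, rectified so that
             all cameras share a common image plane.
    cam_xy : (C, 2) camera positions on the array plane, relative to the
             reference camera.
    alpha  : scalar selecting the synthetic focal depth; a point's disparity is
             proportional to the camera baseline divided by its depth, so each
             view is translated by -alpha times its baseline before averaging.
    """
    acc = np.zeros_like(views[0], dtype=np.float64)
    for img, (cx, cy) in zip(views, cam_xy):
        # Translate the view so points on the chosen focal plane align across
        # cameras, then accumulate; points off that plane land at different
        # positions in each view and are averaged away into blur.
        acc += translate(img.astype(np.float64), (-alpha * cy, -alpha * cx),
                         order=1, mode='constant', cval=0.0)
    return acc / len(views)
```

Sweeping `alpha` produces a focal stack over different depth planes; the paper's method works with the same per-plane aligned views but replaces the plain average with per-pixel selection of the best camera ray.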

Summary

Introduction

Occluded object imaging is a challenging task in many computer vision application fields such as video surveillance and monitoring, hidden object detection and recognition, and tracking through occlusion. In a conventional camera, rays from a point that does not lie on the plane of focus spread into a circle of confusion on the sensor plane, resulting in a blurred image (Figure 1b). A camera array is analogous to a "synthetic" lens aperture, with each camera acting as a sample point on a virtual lens (Figure 1c). We synthetically focus the camera array by choosing a plane of focus and adding up all the rays corresponding to each point on the chosen plane to obtain a pixel of a "synthetic aperture" image. By warping and integrating the multi-view images, synthetic aperture imaging simulates a virtual camera with a large convex lens that can focus on different frontal-parallel or oblique planes with a narrow depth of field. As a result, occluded objects lying on the virtual focal plane appear sharp and visible, while objects off that plane, including the foreground occluder, are blurred out.
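The warping step mentioned above (focusing on frontal-parallel or oblique planes) can be realised with the plane-induced homography; the following hedged sketch shows one way to do it with OpenCV. The parameter names and conventions (reference-to-source extrinsics, plane given by a unit normal and distance in the reference frame) are assumptions made for illustration, not the paper's notation.

```python
import numpy as np
import cv2

def warp_view_to_focal_plane(img, K_src, R_src, t_src, K_ref, n_ref, d_ref, out_size):
    """Warp one camera view into the reference view via the chosen focal plane.

    K_src, K_ref : 3x3 intrinsics of the source and reference cameras.
    R_src, t_src : extrinsics mapping reference-camera coordinates to
                   source-camera coordinates (X_src = R_src @ X_ref + t_src).
    n_ref, d_ref : the focal plane in the reference frame, n_ref^T X = d_ref,
                   with n_ref a unit normal (frontal-parallel or oblique).
    out_size     : (width, height) of the warped output.
    """
    # Homography induced by the plane, mapping reference pixels to source pixels:
    #   H = K_src (R_src + t_src n_ref^T / d_ref) K_ref^{-1}
    H = K_src @ (R_src + np.outer(t_src, n_ref) / d_ref) @ np.linalg.inv(K_ref)
    # warpPerspective expects the source->destination map; since H goes the
    # other way (reference -> source), request the inverse mapping explicitly.
    return cv2.warpPerspective(img, H, out_size,
                               flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
```

Warping every view onto the same focal plane and averaging them reproduces the synthetic aperture image; keeping the warped views separate instead is what enables the per-pixel camera selection sketched after the abstract.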
