Abstract
Vision tasks are complicated by the nonuniform apparent motion associated with dynamic cameras in complex 3D environments. We present a framework for light field cameras that simplifies dynamic-camera problems, allowing stationary-camera approaches to be applied. No depth estimation or scene modelling is required: apparent motion is factored out by exploiting the scene geometry implicitly encoded in the light field. We demonstrate the strength of this framework by applying it to change detection from a moving camera, arriving at the surprising and useful result that change detection can be carried out with a closed-form solution. Its constant runtime, low computational requirements, predictable behaviour, and ease of parallel implementation on hardware including FPGAs and GPUs make this solution attractive for embedded applications, e.g. robotics. We show qualitative and quantitative results for imagery captured using two generations of Lytro camera, with the proposed method generally outperforming both naive pixel-based methods and, for a commonly occurring class of scene, state-of-the-art structure-from-motion methods. We quantify the tradeoffs between tolerance to camera motion and sensitivity to change, and the impact of coherent, widespread scene changes. Finally, we discuss generalization of the proposed framework beyond change detection, allowing classically still-camera-only methods to be applied in moving-camera scenarios.
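To make the closed-form idea concrete, the sketch below illustrates change detection between two two-plane-parameterized 4D light fields under a known in-plane camera translation: translating the camera within the (s, t) plane corresponds to shifting the reference light field along its (s, t) axes, so resampling the reference predicts the new view and a per-ray difference yields a change mask in constant time, with no depth estimation. This is a minimal sketch under stated assumptions, not the paper's exact formulation; the array layout, the function change_mask, the translation units, and the threshold value are all illustrative.

    # Illustrative sketch: change detection between two 4D light fields
    # L_ref and L_new, indexed [s, t, v, u] (two-plane parameterization),
    # under a known camera translation (dx, dy) within the (s, t) plane.
    import numpy as np
    from scipy.ndimage import shift as nd_shift

    def change_mask(L_ref, L_new, dx, dy, tau=0.1):
        """L_ref, L_new: 4D grayscale arrays in [0, 1], indexed [s, t, v, u].
        dx, dy: known camera translation in sub-aperture units (assumed).
        tau: change threshold (hypothetical value)."""
        # Predict the new light field by shifting the reference along (s, t);
        # linear interpolation (order=1) handles sub-sample translations.
        L_pred = nd_shift(L_ref, shift=(dx, dy, 0.0, 0.0),
                          order=1, mode='nearest')
        # Large per-ray residuals flag scene change rather than camera motion.
        return np.abs(L_new - L_pred) > tau

This sketch handles only translation parallel to the camera plane; the paper's closed-form solution addresses more general camera motion, but the constant-runtime, per-ray structure that makes the approach attractive for FPGA and GPU implementation is the same.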