Abstract
Synthetically creating motion blur in two-dimensional (2D) images is a well-understood process and has been used in image processing to develop deblurring systems. However, there are no well-established techniques for synthetically generating arbitrary motion blur within three-dimensional (3D) images, such as depth maps and point clouds, since their behavior is not as well understood. As a prerequisite, we previously developed a method for generating synthetic motion blur in a plane parallel to the sensor detector plane. In this work, as a major extension, we generalize our previously developed framework to synthetically generate linear and radial motion blur along planes at arbitrary angles with respect to the sensor detector plane. Our framework accurately captures the behavior of the real motion blur encountered with a Time-of-Flight (ToF) sensor. It uses a probabilistic model that predicts the locations of the invalid pixels typically present in depth maps containing real motion blur. More specifically, the probabilistic model considers different angles of the motion paths and the velocity of an object with respect to the image plane of the ToF sensor. Extensive experimental results demonstrate how our framework can be applied to synthetically create radial, linear, and combined radial-linear motion blur. We quantify the accuracy of the synthetic generation method by comparing the resulting synthetic depth map to an experimentally captured depth map with motion. Our results indicate that our framework achieves an average Boundary F1 (BF) score for invalid pixels of 0.7192 for synthetic radial motion blur, 0.8778 for synthetic linear motion blur, and 0.62 for synthetic combined radial-linear motion blur.
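To make the evaluation concrete, the sketch below shows one way a Boundary F1 (BF) score between the invalid-pixel masks of a synthetic and a captured depth map could be computed. This is a minimal illustration under stated assumptions, not the authors' implementation: invalid pixels are assumed to be encoded as zeros, and the SciPy-based boundary extraction, the two-pixel match tolerance, and all function names are our own choices.

```python
import numpy as np
from scipy import ndimage

def boundary(mask):
    """Boundary pixels of a binary mask: the mask minus its erosion."""
    return mask & ~ndimage.binary_erosion(mask, structure=np.ones((3, 3)))

def bf_score(pred_mask, gt_mask, tolerance=2):
    """Boundary F1: match boundary pixels within `tolerance` pixels
    (the tolerance is an assumed value, not taken from the paper)."""
    pred_b, gt_b = boundary(pred_mask), boundary(gt_mask)
    if not pred_b.any() or not gt_b.any():
        return 0.0
    # Distance from every pixel to the nearest boundary pixel of each mask.
    d_gt = ndimage.distance_transform_edt(~gt_b)
    d_pred = ndimage.distance_transform_edt(~pred_b)
    precision = (d_gt[pred_b] <= tolerance).mean()
    recall = (d_pred[gt_b] <= tolerance).mean()
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Toy comparison: a synthetic invalid-pixel band vs. a slightly offset
# "captured" band, with invalid ToF pixels stored as zeros.
rng = np.random.default_rng(0)
synthetic = rng.uniform(0.5, 4.0, (64, 64))
captured = synthetic.copy()
synthetic[20:40, 30:33] = 0.0
captured[21:41, 31:34] = 0.0
print(bf_score(synthetic == 0, captured == 0))
```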
Highlights
The ability to synthetically create motion blur in 2D and depth images is useful for a wide range of applications
This work contributes to the state of the art by (1) developing a framework for synthetically generating motion blur in depth maps that mimics the appearance and behavior of the real motion blur observed with a ToF sensor; (2) developing a probabilistic model that predicts the locations of invalid pixels in order to synthetically generate combined radial-linear motion blur; and (3) conducting extensive experiments to verify the performance of our framework for generating synthetic motion blur in depth maps
Motion blur appears in depth maps as an increase in the number of zero-value (invalid) pixels within the depth map (see the illustrative sketch after these highlights)
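The zero-value mechanism in the last highlight lends itself to a short illustration. The sketch below is a hypothetical stand-in for the paper's probabilistic model, not the actual method: it zeroes out pixels swept from a moving object's boundary along a linear motion direction, with a probability that grows with the object's speed and decays with distance. The speed-to-probability mapping, step count, decay, and all names are assumptions.

```python
import numpy as np
from scipy import ndimage

def inject_linear_blur_invalids(depth, object_mask, angle_deg, speed,
                                max_steps=5, seed=0):
    """Zero out pixels swept from the object's boundary along the motion
    direction; the probability model here is purely illustrative."""
    rng = np.random.default_rng(seed)
    out = depth.copy()
    # Boundary of the moving object: the mask minus its erosion.
    edge = object_mask & ~ndimage.binary_erosion(object_mask)
    ys, xs = np.nonzero(edge)
    dy, dx = np.sin(np.deg2rad(angle_deg)), np.cos(np.deg2rad(angle_deg))
    p0 = min(1.0, speed / 10.0)  # assumed speed-to-probability mapping
    for step in range(1, max_steps + 1):
        ny = np.clip(np.round(ys + step * dy).astype(int), 0, depth.shape[0] - 1)
        nx = np.clip(np.round(xs + step * dx).astype(int), 0, depth.shape[1] - 1)
        hit = rng.random(ys.size) < p0 / step  # invalidation fades with distance
        out[ny[hit], nx[hit]] = 0.0            # zero encodes an invalid ToF pixel
    return out

# Toy usage: a square object at 1 m moving horizontally over a 2 m background.
depth = np.full((48, 48), 2.0)
obj = np.zeros_like(depth, dtype=bool)
obj[16:32, 16:32] = True
depth[obj] = 1.0
blurred = inject_linear_blur_invalids(depth, obj, angle_deg=0.0, speed=6.0)
```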
Summary
The ability to synthetically create motion blur in 2D and depth images is useful for a wide range of applications. Motion blur in depth maps has previously been examined as a baseline for comparing different 3D sensing technologies or for evaluating the performance of various deblurring algorithms [19,35]. These works typically focus on minimizing the effects of motion blur and provide no insight into how to synthetically create it. This work presents a framework for synthetically generating motion blur within a depth map that mimics the real motion blur observed in ToF sensor data. It contributes to the state of the art by (1) developing a framework for synthetically generating motion blur in depth maps that mimics the appearance and behavior of the real motion blur observed with a ToF sensor; (2) developing a probabilistic model that predicts the locations of invalid pixels in order to synthetically generate combined radial-linear motion blur; and (3) conducting extensive experiments to verify the performance of our framework for generating synthetic motion blur in depth maps.