Abstract

Applications such as autonomous navigation, robot vision, and autonomous flying require depth maps of the scene. Depth can be estimated with a single moving camera (depth from motion); however, traditional depth-from-motion algorithms have low processing speeds and high hardware requirements that limit their use in embedded systems. In this work, we propose a hardware architecture for depth from motion that consists of a flow/depth transformation and a new optical flow algorithm. Our optical flow formulation is an extension of the stereo matching problem: a pixel-parallel/window-parallel approach in which a correlation function based on the sum of absolute differences (SAD) computes the optical flow. Further, to improve the SAD, we propose the curl of the intensity gradient as a preprocessing step. Experimental results demonstrate higher accuracy (90%) than previous Field Programmable Gate Array (FPGA)-based optical flow algorithms. For depth estimation, our algorithm delivers dense maps with motion and depth information for all image pixels, with a processing speed up to 128 times faster than that of previous work, making high performance attainable for embedded applications.
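
To make the pipeline concrete, here is a minimal Python sketch of the three steps the abstract names: the curl of the intensity gradient as preprocessing, SAD block matching for the optical flow, and a flow-to-depth conversion. It is an illustration under stated assumptions, not the paper's FPGA design: the function names, the window and search sizes, and the pure-translation depth model Z = f * |t| / |flow| are introduced here for clarity, and the loops that the hardware evaluates in a pixel-parallel/window-parallel fashion are written sequentially.

import numpy as np

def curl_of_gradient(img):
    # z-component of curl(grad I). Analytically zero, but the
    # finite-difference residual highlights texture; a hedged reading
    # of the paper's preprocessing step, not its exact kernel.
    img = img.astype(np.float64)
    gy, gx = np.gradient(img)  # gradient along rows (y), then cols (x)
    return np.gradient(gy, axis=1) - np.gradient(gx, axis=0)

def sad_flow(prev, curr, win=3, search=4):
    # Brute-force SAD block matching: for each pixel, pick the
    # displacement (dx, dy) that minimizes the sum of absolute
    # differences over a (2*win+1)^2 window.
    h, w = prev.shape
    pad = win + search
    prev_p = np.pad(prev.astype(np.float64), pad, mode='edge')
    curr_p = np.pad(curr.astype(np.float64), pad, mode='edge')
    flow = np.zeros((h, w, 2), dtype=np.int32)
    for y in range(h):
        for x in range(w):
            cy, cx = y + pad, x + pad
            ref = prev_p[cy - win:cy + win + 1, cx - win:cx + win + 1]
            best = np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    cand = curr_p[cy + dy - win:cy + dy + win + 1,
                                  cx + dx - win:cx + dx + win + 1]
                    sad = np.abs(ref - cand).sum()
                    if sad < best:
                        best, flow[y, x] = sad, (dx, dy)
    return flow

def depth_from_flow(flow, focal_px, translation_m):
    # Assumed flow/depth model: Z ~ f * |t| / |u| for a purely
    # translational camera motion parallel to the image plane.
    mag = np.linalg.norm(flow.astype(np.float64), axis=2)
    return focal_px * translation_m / np.maximum(mag, 1e-6)

# Synthetic usage: a 2-pixel horizontal shift should be recovered as flow.
rng = np.random.default_rng(0)
prev = rng.integers(0, 255, (48, 48)).astype(np.float64)
curr = np.roll(prev, 2, axis=1)
flow = sad_flow(curl_of_gradient(prev), curl_of_gradient(curr))
depth = depth_from_flow(flow, focal_px=500.0, translation_m=0.1)

The nested loops over (dx, dy) and over pixels are exactly what a pixel-parallel/window-parallel design can unroll into independent hardware units, which is what turns this brute-force search into a real-time operation on an FPGA.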

Highlights

  • Smart cameras are machine vision systems which, in addition to image capture circuitry, are capable of extracting application-specific information from captured images

  • When compared with previous work (Figure 14), our algorithm shows high performance under real-world scenarios

  • In order to solve these limitations, we have proposed a new depth estimation algorithm whose Field Programmable Gate Array (FPGA) implementation achieves high performance for embedded applications

Summary

Introduction

Smart cameras are machine vision systems which, in addition to image capture circuitry, are capable of extracting application-specific information from captured images. For video surveillance, image processing algorithms implemented inside the camera fabric can detect and track pedestrians [1], while for a robotic application, computer vision algorithms could estimate the system's egomotion [2]. In current embedded applications, image processing algorithms inside the smart camera's fabric deliver an efficient on-board solution for motion detection [7], object detection/tracking [8,9], inspection and surveillance [10], human behavior recognition [11], etc. Computer vision algorithms are frequently used by smart cameras since they are the basis of several applications (automatic inspection, process control, event detection, object or environment modeling, navigation, and so on). Depth estimation is particularly relevant in the context of a smart camera because depth is the basis for useful scene abstractions, for example, 3D reconstructions of the world and camera egomotion.

Depth Estimation from Monocular Sequences
Motivation and Scope
Related Work
FPGA Architectures for Optical Flow
Optical Flow Methods Based on Learning Techniques
The Proposed Algorithm
Frame Buffer
Optical Flow
Search Template
Depth Estimation
The FPGA Architecture
Circular Buffer
Curl Estimation
Results and Discussion
Performance for the Optical Flow Algorithm
Method
Findings
Performance for the Depth Estimation Step
Conclusions