Abstract

In this paper, we describe a compact system for real-time generation of three-dimensional motion fields. Our system consists of one FPGA, two cameras, and one host processor. With it, we can generate dense three-dimensional motion fields (640 × 480 vectors in a standard-size image) at video rate from a dense optical flow and a dense depth map obtained by area-based matching. Performance can be raised to 840 frames per second for small (320 × 240) images by configuring a different circuit, though this requires more hardware resources. By changing the search spaces for the optical flow and the depth map through reconfiguration, we can control the maximum detectable motion speed and the minimum distance to moving objects in the image under limited hardware resources.
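To illustrate the kind of computation the abstract describes, the following is a minimal sketch of how a per-pixel 3D motion vector can be derived from a dense optical flow field and two consecutive depth maps. It assumes a pinhole camera model with hypothetical intrinsics (fx, fy, cx, cy) and nearest-neighbour depth sampling; the function and array names are illustrative and do not reflect the paper's hardware implementation.

```python
# Hypothetical sketch: per-pixel 3D motion from dense optical flow and depth.
# Intrinsics fx, fy, cx, cy and all array names are illustrative assumptions.
import numpy as np

def motion_field_3d(flow, depth_t, depth_t1, fx, fy, cx, cy):
    """flow: (H, W, 2) pixel displacements (du, dv);
    depth_t, depth_t1: (H, W) depth maps at consecutive frames.
    Returns an (H, W, 3) array of 3D motion vectors in camera coordinates."""
    h, w = depth_t.shape
    u, v = np.meshgrid(np.arange(w, dtype=np.float32),
                       np.arange(h, dtype=np.float32))

    # Back-project each pixel at time t using its depth.
    X_t = (u - cx) * depth_t / fx
    Y_t = (v - cy) * depth_t / fy

    # Pixel position at time t+1 predicted by the optical flow.
    u1 = np.clip(u + flow[..., 0], 0, w - 1)
    v1 = np.clip(v + flow[..., 1], 0, h - 1)

    # Sample the second depth map at the flowed positions (nearest neighbour).
    z1 = depth_t1[v1.round().astype(int), u1.round().astype(int)]
    X_t1 = (u1 - cx) * z1 / fx
    Y_t1 = (v1 - cy) * z1 / fy

    # 3D motion vector per pixel: difference of the back-projected points.
    return np.stack([X_t1 - X_t, Y_t1 - Y_t, z1 - depth_t], axis=-1)
```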
