Abstract

3D shape information is one of the most important cues in image processing and computer vision. Unlike traditional multi-image depth from defocus (DFD) techniques, the monocular DFD (MDFD) algorithm proposed by Hu and Haan can reconstruct 3D shape from a single monocular defocused image with low computational complexity. In this paper, we present a real-time MDFD system implemented on an FPGA device. To reduce the FPGA design cost, Vivado High-Level Synthesis (VHLS) is applied to design the MDFD system. The system architecture, built on a FIFO-based convolution, is first described in C/C++ code that is then converted to the FPGA design by VHLS. The PIPELINE, LOOP_MERGE, and ARRAY_PARTITION directives are applied to optimize the latency and initiation interval of the proposed system. The performance and resource utilization of the whole system are evaluated by processing 640×480 defocused images of real scenes. The system processes about 22 images per second at a 20 MHz working frequency and maintains 93.29% depth accuracy on the 3D object test, making it a state-of-the-art real-time MDFD system compared with other recent works.
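For readers unfamiliar with the HLS coding style referred to above, the sketch below illustrates a generic line-buffer (FIFO-style) 3×3 convolution with the PIPELINE and ARRAY_PARTITION directives applied. It is a minimal illustration under stated assumptions only: the image size, data types, kernel, and function names are assumed, and it is not the authors' MDFD implementation.

```cpp
// Hedged sketch: generic Vivado HLS-style 3x3 convolution using line buffers
// (FIFO-like row storage) and a sliding window. Border handling is omitted
// for brevity, so output near the image edges is undefined.
#include <ap_int.h>
#include <hls_stream.h>

#define WIDTH  640
#define HEIGHT 480

typedef ap_uint<8> pixel_t;

void conv3x3(hls::stream<pixel_t> &src, hls::stream<pixel_t> &dst,
             const short kernel[3][3], const short shift) {
    // Two line buffers hold the previous two image rows.
    pixel_t line_buf[2][WIDTH];
#pragma HLS ARRAY_PARTITION variable=line_buf complete dim=1
    // 3x3 sliding window over the current neighbourhood.
    pixel_t window[3][3];
#pragma HLS ARRAY_PARTITION variable=window complete dim=0

    for (int r = 0; r < HEIGHT; r++) {
        for (int c = 0; c < WIDTH; c++) {
#pragma HLS PIPELINE II=1
            pixel_t px = src.read();

            // Shift the window left and refill its right column from the
            // line buffers (two rows above, one row above) and the new pixel.
            for (int i = 0; i < 3; i++)
                for (int j = 0; j < 2; j++)
                    window[i][j] = window[i][j + 1];
            window[0][2] = line_buf[0][c];
            window[1][2] = line_buf[1][c];
            window[2][2] = px;

            // Update the line buffers for the next row.
            line_buf[0][c] = line_buf[1][c];
            line_buf[1][c] = px;

            // Multiply-accumulate over the window, then scale and clamp.
            int acc = 0;
            for (int i = 0; i < 3; i++)
                for (int j = 0; j < 3; j++)
                    acc += kernel[i][j] * (int)window[i][j];
            acc >>= shift;
            if (acc < 0) acc = 0;
            if (acc > 255) acc = 255;
            dst.write((pixel_t)acc);
        }
    }
}
```

With `#pragma HLS PIPELINE II=1`, one pixel is consumed and produced per clock cycle in steady state, which is what makes a streaming convolution of this kind suitable for real-time processing of a 640×480 video stream.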

Highlights

  • Extraction of 3D information from images is an important part of computer vision systems

  • This paper systematically demonstrates the design of a real-time monocular DFD (MDFD) system on a Field Programmable Gate Array (FPGA) through Vivado HLS

  • This paper implements the MDFD technique proposed by Hu and Haan [8] to reconstruct 3D shape from a single defocused image


Summary

INTRODUCTION

Extraction of 3D information from images is an important part of computer vision systems. In increasingly mature artificial intelligence systems, 2D information about a scene can no longer meet researchers' needs, especially in fields that require three-dimensional information, such as robotic arm control [1], SLAM navigation [2], and others [3]. To rebuild 3D shape from 2D images, Pentland first proposed the depth from defocus (DFD) algorithm in the 1980s [4], which infers depth information by measuring the degree of defocus present in the image; he reported that his method provided 64×64 3D maps at a speed of 8 frames per second (fps). The contributions of the proposed MDFD system are as follows: 1) to achieve a real-time monocular 3D reconstruction system at 22 fps on an FPGA platform while keeping 93.29% depth accuracy on a real 3D object test; 2) to systematically demonstrate the design of the MDFD system on FPGA through Vivado HLS; and 3) to show the strong adaptability of the 3D reconstruction to real scenes, which proves the engineering value of the proposed system.
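As a point of reference for how defocus encodes depth, the standard thin-lens blur model (in the spirit of Pentland's formulation, not the specific estimator of [8]) can be written as follows: for a lens of focal length $f$, aperture diameter $D$, and a sensor placed at distance $v$ behind the lens, a point at object distance $u$ is imaged as a blur circle of radius

$$ r = \frac{Dv}{2}\left|\frac{1}{f} - \frac{1}{v} - \frac{1}{u}\right|. $$

Once the local blur radius $r$ is estimated from the image, the depth $u$ can be recovered by inverting this relation for known camera parameters $f$, $D$, and $v$.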

