Abstract

Binocular vision and convolutional neural networks (CNNs) are widely used in modern intelligent vision processing systems, such as robotics, autonomous vehicles, and AR devices. However, both the classic semiglobal matching (SGM) algorithm and deep CNNs demand substantial computing resources to reach their performance goals. Traditional embedded CPUs and graphics processing units (GPUs) cannot simultaneously meet the processing speed and energy requirements, while specialized circuits dedicated to SGM and CNN processing, respectively, incur considerable hardware and development costs. Meanwhile, with the growing popularity of deep learning, neural processing units (NPUs) have become prevalent in many embedded and edge devices, offering high-throughput computing power for the matrix operations involved in neural networks. In this work, we take advantage of the neural processing architectures integrated in SoC chips to accelerate the SGM process, so that existing hardware resources are better utilized instead of investing further resources in specialized SGM components. To this end, this letter first deploys SGM on an NPU by converting the incompatible operations into the neural-computing flow, and a configurable neural processing element is proposed to flexibly support various vector operation sequences. Then, a hybrid dataflow scheduler and the corresponding hardware modifications are introduced to accelerate the cost processing, improving hardware utilization and reducing the on-chip memory footprint and access. Our solution runs at 45 fps for an image size of $640\times 480$ with 128 disparity levels. Its speed-energy efficiency is $52\times$ better than a GPU (Jetson TX1) solution, with negligible additional hardware overhead and accuracy loss.
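To illustrate why SGM maps naturally onto the vector/matrix datapaths of an NPU, the sketch below expresses the standard SGM path-cost aggregation (Hirschmüller's recurrence) as per-pixel elementwise operations over the disparity axis. This is a minimal NumPy reference, not the letter's hardware implementation: the cost-volume layout (H, W, D), the penalty values P1 and P2, and the single left-to-right aggregation path are all illustrative assumptions.

# Minimal sketch of SGM path-cost aggregation as vector operations over the
# disparity axis; an NPU's elementwise add/min units could execute the same
# sequence. Not the authors' implementation.
import numpy as np

def sgm_aggregate_left_to_right(cost, P1=10, P2=120):
    """Aggregate matching costs along one path (left -> right).

    cost: matching cost volume of shape (H, W, D); P1, P2: smoothness
    penalties for disparity changes of 1 and >1 (assumed values).
    """
    H, W, D = cost.shape
    L = np.empty((H, W, D), dtype=np.float32)
    L[:, 0, :] = cost[:, 0, :]                       # first column: no predecessor
    for x in range(1, W):
        prev = L[:, x - 1, :]                        # (H, D) costs at previous pixel
        prev_min = prev.min(axis=1, keepdims=True)   # min over all disparities
        # Candidate transitions: same disparity, d±1 (+P1), any disparity (+P2)
        same  = prev
        minus = np.pad(prev[:, :-1], ((0, 0), (1, 0)), constant_values=np.inf) + P1
        plus  = np.pad(prev[:, 1:],  ((0, 0), (0, 1)), constant_values=np.inf) + P1
        jump  = prev_min + P2
        best  = np.minimum(np.minimum(same, minus), np.minimum(plus, jump))
        # Subtract prev_min to keep values bounded (standard SGM normalization)
        L[:, x, :] = cost[:, x, :] + best - prev_min
    return L

Each inner step is a fixed sequence of shifted-vector adds and minimums over the D disparities, which is the kind of vector operation sequence the configurable neural processing element described in the abstract is meant to support.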

