Abstract

Embedded vision systems often capture images in low-light conditions, which results in images with low visibility and quality. However, high-performance image enhancement algorithms consume substantial chip resources and are time-consuming in embedded vision scenarios with strict real-time requirements. In this paper, a multi-core heterogeneous embedded vision system is designed based on ZYNQ, and it uses an optimized screen blending algorithm to enhance low-light images. First, the inverse color coefficient is optimized to enhance the texture details of the image. Second, adaptive gamma correction is performed on the inverted image to preserve the color fidelity of the enhanced image. Finally, through hardware-software co-design, the inverse color coefficient is computed by a hardware logic circuit to accelerate the algorithm. Experimental results show that low-light images collected and processed by this system have better image quality than those produced by the original algorithm, and the processing speed is nearly 5 times faster than without hardware acceleration, which meets the real-time requirements of embedded vision systems.
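To make the pipeline described above concrete, the following is a minimal sketch of screen-blend-based low-light enhancement. It is not the paper's implementation: the coefficient `k` is an illustrative stand-in for the optimized inverse color coefficient, and a fixed `gamma` stands in for the adaptive gamma correction applied to the inverted image.

```python
import numpy as np

def enhance_low_light(img, k=0.9, gamma=0.8):
    """Sketch of screen-blend low-light enhancement.

    img   : float array with values in [0, 1]
    k     : weight for the blended layer (stand-in for the paper's
            optimized inverse color coefficient)
    gamma : fixed exponent (stand-in for the adaptive gamma correction)
    """
    inv = 1.0 - img                 # inverted (negative) image
    inv = inv ** gamma              # gamma-correct the inverted image
    layer = k * (1.0 - inv)         # re-invert and weight the second layer
    # Screen blend: out = 1 - (1 - a)(1 - b). Since out = a + (1 - a) * b,
    # the result is never darker than the input, which brightens shadows.
    out = 1.0 - (1.0 - img) * (1.0 - layer)
    return np.clip(out, 0.0, 1.0)
```

Because the screen blend only multiplies and subtracts per pixel, the inner loop maps naturally onto the FPGA fabric of a ZYNQ device, which is what enables the hardware acceleration reported above.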
