Abstract

As a fundamental feature of intelligent vehicles, vision-based road detection must be executed on a real-time embedded platform with high accuracy. Road detection is often applied in conjunction with lane detection to determine the drivable regions. Although some existing approaches based on large deep learning models have achieved high accuracy on road detection datasets, they often do not consider the low-power requirement of a typical embedded system. In this letter, an ultralow-power hardware accelerator for road detection is proposed. By adopting a top-down convolutional neural network (CNN) structure, a small CNN, namely RoadNet, is trained that achieves near state-of-the-art detection accuracy. Furthermore, each CNN layer is trimmed to be computationally identical, so that every processing element in the architecture is fully utilized. When implemented in a 32-nm process technology, the proposed hardware accelerator occupies a chip area of 0.45 mm² and consumes only 80 mW of power, which corresponds to an equivalent power efficiency of about 300 GOP/s/W. The RoadNet chip is capable of processing 241 frames/s at 1080p image resolution, making it an ultralow-power hardware accelerator suitable for embedded road detection systems.
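For readers who want to relate the reported figures, the short sketch below back-computes the implied compute throughput and per-frame workload from the abstract's own numbers; it assumes the conventional relation throughput = power efficiency × power, which the abstract does not state explicitly, so the results are only an order-of-magnitude consistency check.

```python
# Rough consistency check of the figures quoted in the abstract.
# Assumption (not stated in the letter): throughput = efficiency * power.

power_w = 0.080               # reported power consumption: 80 mW
efficiency_gops_per_w = 300   # reported power efficiency: ~300 GOP/s/W
frame_rate_fps = 241          # reported frame rate: 241 frames/s at 1080p

# Implied compute throughput of the accelerator.
throughput_gops = efficiency_gops_per_w * power_w        # ~24 GOP/s

# Implied per-frame workload of RoadNet at 1080p.
ops_per_frame_gop = throughput_gops / frame_rate_fps     # ~0.1 GOP/frame

print(f"Implied throughput    : {throughput_gops:.1f} GOP/s")
print(f"Implied per-frame cost: {ops_per_frame_gop * 1000:.0f} MOP/frame")
```

Under that assumption, the quoted efficiency and power imply roughly 24 GOP/s of sustained throughput, or on the order of 100 MOP per 1080p frame, which is consistent with the claim that RoadNet is a deliberately small CNN.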
