Abstract

Perception of the surrounding environment is an essential task for fully autonomous driving systems, but its high computational and network loads typically prevent a single host machine from handling the entire workload. Decentralized processing is a candidate for reducing these loads; however, it has not been clear whether this approach fulfills the requirements of onboard systems, including low latency and low power consumption. Embedded-oriented graphics processing units (GPUs) are attracting great interest because they provide massively parallel computation with lower power consumption than traditional GPUs. This study explored the effects of decentralized processing on autonomous driving using embedded-oriented GPUs as decentralized units. We implemented a prototype system that offloaded image-based object detection tasks onto embedded-oriented GPUs to clarify these effects. The experimental evaluation demonstrated that decentralized processing combined with network quantization achieved a delay of approximately 27 ms between feeding an image and the arrival of detection results at the host, a power consumption of approximately 7 W on each GPU, and a reduction in network load by orders of magnitude. Based on these results, we concluded that decentralized processing could be a promising approach for reducing processing latency, network load, and power consumption toward the deployment of autonomous driving systems.
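
The abstract describes a host that offloads image-based object detection to embedded-oriented GPU workers over the network. The sketch below is not the paper's implementation; it only illustrates one possible shape of such a worker under assumed conventions: the host sends length-prefixed JPEG frames over TCP, the worker runs a (stubbed) quantized detector, and detection results are returned as JSON. The port number, framing protocol, and run_detector() stub are hypothetical placeholders.

import json
import socket
import struct

PORT = 5555  # hypothetical port for the host-to-worker link


def recv_exact(conn, n):
    """Read exactly n bytes, or fewer if the peer closes the connection."""
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            break
        buf += chunk
    return buf


def run_detector(jpeg_bytes):
    """Stub standing in for a quantized object-detection network on the GPU."""
    # A real worker would decode the frame and run inference here.
    return [{"label": "car", "score": 0.9, "box": [0, 0, 10, 10]}]


def serve():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("0.0.0.0", PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            while True:
                header = recv_exact(conn, 4)
                if len(header) < 4:
                    break  # host closed the connection
                (size,) = struct.unpack("!I", header)
                frame = recv_exact(conn, size)
                results = json.dumps(run_detector(frame)).encode()
                conn.sendall(struct.pack("!I", len(results)) + results)


if __name__ == "__main__":
    serve()

In this kind of design, the end-to-end latency reported in the abstract would span JPEG transfer, inference on the embedded GPU, and the return of the (much smaller) detection payload, which is why offloading compressed frames and quantizing the network can cut both delay and network load.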
