Abstract

This paper proposes a camera system designed for local dynamic map (LDM) generation, capable of simultaneously performing object detection, tracking, and 3D position estimation. This paper focuses on adapting existing approaches to better suit our application, rather than proposing novel methods. We modified the detection head of YOLOv4 to improve detection performance on small objects and to predict fiducial points for 3D position estimation. Compared to YOLOv4, the modified detector shows an improvement of approximately 5% mAP on the VisDrone2019 dataset and around 3% mAP on our dataset. We also propose a tracker based on DeepSORT. Unlike DeepSORT, which applies a feature extraction network to each detected object, the proposed tracker applies the feature extraction network once to the entire image. To increase the resolution of the feature maps, the tracker integrates the feature aggregation network (FAN) structure into the DeepSORT network. The difference in multiple object tracking accuracy (MOTA) between the proposed tracker and DeepSORT is a marginal 0.3%. However, because it extracts a feature map once for the entire image, the proposed tracker has a nearly constant computational load regardless of the number of detected objects, which makes it suitable for embedded edge devices. The proposed methods have been implemented on a system on chip (SoC), the Qualcomm QCS605, using network pruning and quantization, enabling the entire process to be executed at 10 Hz on this edge device.
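
A minimal sketch (not the authors' implementation) of the tracker's key idea: instead of running a re-identification CNN on every detected crop as DeepSORT does, one feature map is extracted for the whole image and a fixed-size appearance embedding is pooled per detection with ROI-Align, so the cost of the appearance branch does not grow with the number of detections. The backbone, embedding size, and class names here are assumptions for illustration only; the paper's detector and FAN structure are not reproduced.

```python
import torch
import torch.nn as nn
import torchvision

class WholeImageAppearanceHead(nn.Module):
    """Hypothetical appearance branch: one backbone pass per image,
    then ROI-Align per detection (stand-in for the paper's FAN-based extractor)."""
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        backbone = torchvision.models.resnet18(weights=None)
        # Keep convolutional stages only (output stride 32, 512 channels).
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        self.embed = nn.Sequential(nn.Flatten(), nn.Linear(512 * 7 * 7, embed_dim))

    def forward(self, image: torch.Tensor, boxes_xyxy: torch.Tensor) -> torch.Tensor:
        # image: [1, 3, H, W]; boxes_xyxy: [N, 4] in pixel coordinates.
        fmap = self.features(image)                      # single pass over the full image
        rois = torchvision.ops.roi_align(
            fmap, [boxes_xyxy], output_size=(7, 7),
            spatial_scale=fmap.shape[-1] / image.shape[-1],
        )                                                # [N, 512, 7, 7]
        return nn.functional.normalize(self.embed(rois), dim=1)

# Usage: the resulting embeddings would replace DeepSORT's per-crop re-ID
# features in the data-association step.
head = WholeImageAppearanceHead()
img = torch.randn(1, 3, 512, 512)
dets = torch.tensor([[32.0, 40.0, 120.0, 200.0], [300.0, 180.0, 380.0, 320.0]])
features = head(img, dets)   # shape [2, 128]; backbone cost is independent of len(dets)
```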
