Abstract

Low latency in the detection of objects in images is fundamental for providing an immediate response and carrying out the relevant actions in many application scenarios. Deep learning-based models offer better results than traditional object detection techniques, but this increased accuracy usually comes at a very high computational cost. For this reason, several edge computing devices have emerged that perform inference quickly and efficiently thanks to built-in hardware accelerators. In this work, different devices are evaluated using deep learning-based object detection algorithms. For this purpose, YOLOv3, YOLOv5 and YOLOX, with all their variants, have been run on an NVIDIA Jetson Nano, an NVIDIA Jetson AGX Xavier and a Google Coral Dev Board. The models and devices are evaluated on one of the most widespread datasets, MS COCO, using twenty different input sizes and three frameworks (PyTorch, TensorRT and TensorFlow Lite). From the data obtained, results can be extrapolated to other models such as YOLOv8. Additionally, the FPS/power consumption and FPS/cost ratios are analyzed, as well as the feasibility of each device in a real use scenario. As a result of this work, valuable recommendations are provided for projects where this technology is to be applied.
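
As an illustration of the kind of measurement summarized above, the sketch below estimates inference latency and FPS for one YOLOv5 variant under PyTorch. It is a minimal, hypothetical example, not the paper's benchmarking code: it assumes the ultralytics/yolov5 torch.hub entry point is reachable, uses a single fixed 640x640 input instead of the twenty input sizes evaluated, and omits the TensorRT and TensorFlow Lite paths as well as the power and cost measurements.

import time
import torch

# Hypothetical example: load a small YOLOv5 variant from the ultralytics/yolov5
# torch.hub repository (assumed to be available) and time forward passes.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
model.eval()

# Single fixed input size; the study sweeps twenty different input sizes.
img = torch.rand(1, 3, 640, 640)

with torch.no_grad():
    for _ in range(10):          # warm-up runs so initialization is not timed
        model(img)

    n_runs = 100
    start = time.perf_counter()
    for _ in range(n_runs):
        model(img)
    elapsed = time.perf_counter() - start

print(f"mean latency: {1000 * elapsed / n_runs:.1f} ms | FPS: {n_runs / elapsed:.1f}")

On an edge device, the same timing loop would be repeated for each framework and input size, while power draw for the FPS/power ratio would be read separately from the board's own monitoring tools.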
