Abstract

Real-time human detection and tracking is an important task in Advanced Driver Assistance Systems (ADAS), especially for providing information about the situation in front of the vehicle. Deep Convolutional Neural Networks (CNNs) are widely applied to classify and detect objects and have shown impressive performance. However, the high computational cost of deep CNNs makes them difficult to apply in real ADAS systems. Since 2014, one-stage detector approaches such as SSD and YOLO have begun to be applied on devices with low computational capacity. In this work, we present a real-time system for detecting and tracking humans (pedestrians, cyclists, and riders) for ADAS, implemented on a Raspberry Pi 3 Model B Plus. The object detection approach in this study uses the SSD framework, and human movement is tracked by calculating the displacement of the midpoint coordinates of object bounding boxes between two consecutive frames. The results show that real-time human detection and tracking on the Raspberry Pi 3 Model B Plus with a 300 x 300 input frame runs at 0.8 FPS with 77.6 percent processor consumption and 70.3 percent memory consumption. Therefore, running at 0.8 FPS, the Raspberry Pi 3 Model B Plus is not suitable for human detection and tracking in ADAS systems at vehicle speeds above 50 km/h. In addition, the tracking system based on the movement of the midpoint of the bounding box has problems when bounding boxes overlap or intersect each other.
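To make the tracking step concrete, the following is a minimal sketch of the midpoint-displacement idea described above: each detection in the current frame is matched to the nearest bounding-box midpoint from the previous frame, and the displacement vector between the two midpoints is taken as the object's movement. The box format (xmin, ymin, xmax, ymax), the distance threshold, and all function names are illustrative assumptions, not taken from the paper.

```python
import math

def midpoint(box):
    # Center of a bounding box given as (xmin, ymin, xmax, ymax).
    xmin, ymin, xmax, ymax = box
    return ((xmin + xmax) / 2.0, (ymin + ymax) / 2.0)

def track_midpoints(prev_boxes, curr_boxes, max_dist=50.0):
    # Match each current box to the nearest previous midpoint and return
    # (prev_index, curr_index, (dx, dy)) triples. Boxes whose nearest
    # match is farther than max_dist are treated as new objects.
    prev_mids = [midpoint(b) for b in prev_boxes]
    matches = []
    for j, box in enumerate(curr_boxes):
        cx, cy = midpoint(box)
        best_i, best_d = None, max_dist
        for i, (px, py) in enumerate(prev_mids):
            d = math.hypot(cx - px, cy - py)
            if d < best_d:
                best_i, best_d = i, d
        if best_i is not None:
            px, py = prev_mids[best_i]
            matches.append((best_i, j, (cx - px, cy - py)))
    return matches

# Example: detections in frame t-1 and frame t (hypothetical values)
prev = [(10, 20, 60, 120), (200, 40, 260, 160)]
curr = [(14, 22, 64, 122), (190, 42, 250, 162)]
print(track_midpoints(prev, curr))
```

Because the association relies only on nearest midpoints, two boxes that overlap or cross paths can be matched to the wrong predecessor, which is the failure case noted in the abstract.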
