Abstract

Rail transit is developing towards intelligence and requires substantial computing resources to perform deep learning tasks such as obstacle detection and defect detection. By offloading workloads to the edge, edge computing (EC) can reduce the burden on cloud nodes and mitigate the high transmission delay and heavy traffic load of a cloud computing architecture. Based on the existing YOLOv3 model, we propose a model segmentation method and different sub-model distribution strategies to save the computing resources of onboard equipment and realize edge-train collaborative inference. YOLOv3 was chosen as the detection model because, compared with two-stage models, it balances real-time performance and inference accuracy; both YOLOv4 and YOLOX are improvements built on YOLOv3. Collaborative inference performs better than running the complete model on onboard devices. Within the edge computing architecture, we adopted various sub-models and model segmentation methods for collaborative inference, and the experimental results demonstrate the effectiveness of these methods. We provide two feasible model-cutting solutions that unite object detection and edge computing in a distributed manner. One is serial inference, which occupies the same total computing capacity as the original model, so the inference result is obtained at minimum resource cost. The other is parallel inference, which allows multiple edge nodes to perform competitive inference to minimize inference time. We implemented the collaborative inference solution in real-world tests and found that the proposed EC-based method can complete real-time object detection tasks while consuming fewer computing resources on onboard devices.
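To make the serial cutting scheme concrete, the following is a minimal sketch of split inference in a PyTorch style: a stand-in convolutional backbone is cut at a chosen layer, the shallow sub-model runs on the onboard device, and the remaining layers run on an edge node. The layer stack, the split point, and the omitted feature-map transport step are illustrative assumptions, not the paper's actual YOLOv3 partition.

import torch
import torch.nn as nn

# Stand-in backbone; the real model would be YOLOv3's Darknet-53 plus detection heads.
backbone = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
)

split_at = 2                          # hypothetical cut point between sub-models
onboard_part = backbone[:split_at]    # shallow layers kept on the train's onboard device
edge_part = backbone[split_at:]       # deeper layers offloaded to an edge node

frame = torch.randn(1, 3, 416, 416)   # one camera frame at YOLOv3's usual input size

with torch.no_grad():
    # Onboard device computes only the shallow layers ...
    intermediate = onboard_part(frame)
    # ... then ships the intermediate feature map to the edge node, which
    # finishes the forward pass (the network transport layer is omitted here).
    features = edge_part(intermediate)

print(features.shape)

Because the two sub-models together execute exactly the layers of the original model, the serial scheme uses the same total computation; only the placement changes, which is why it minimizes resource cost rather than latency.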
