Abstract

Artificial intelligence has been adopted to facilitate monitoring, operation, and decision-making in the logistics field. Logistics robots with environment perception capabilities have been used to improve warehousing efficiency in logistics systems. However, autonomous mobile robots face computationally intensive, real-time tasks such as navigation, localization, and obstacle avoidance. In this article, we present EventTube, an edge-computing-based event-aware system that efficiently discovers events from video data captured by RGB monocular cameras and collaborates with individual devices to make timely decisions. EventTube deploys a semantic context extraction pipeline on edge servers to aggregate video streams from mobile robots and feed a few keyframes, including those marking the start and end of specific events, to the successive perception pods, accelerating the robots' response speed. The event-related model parameters are trained and updated online on a server. Experiments on video data collected at a warehouse site with our mobile robots show that EventTube significantly improves parcel delivery efficiency without affecting regular deliveries.
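
The keyframe selection step described above can be illustrated with a minimal sketch. The snippet below is not the paper's implementation: it assumes a lightweight detector on the edge server produces per-frame event scores, and the names (`select_event_keyframes`, `frame_scores`, the threshold value) are hypothetical. It keeps only the frames at the start and end of each detected event, which is the kind of sparse keyframe set the pipeline would forward to the downstream perception pods.

```python
import numpy as np

def select_event_keyframes(frame_scores, threshold=0.5):
    """Return indices of keyframes at the start and end of each detected event.

    frame_scores: 1-D array of per-frame event scores in [0, 1],
    e.g. produced by a lightweight event detector on the edge server
    (an assumption for this sketch). A frame is "in an event" when its
    score exceeds threshold.
    """
    in_event = frame_scores > threshold
    keyframes = []
    prev = False
    start = None
    for i, active in enumerate(in_event):
        if active and not prev:   # rising edge: an event starts here
            start = i
        if prev and not active:   # falling edge: the event ended on the previous frame
            keyframes.extend([start, i - 1])
        prev = active
    if prev:                      # event still open at the last frame
        keyframes.extend([start, len(in_event) - 1])
    return keyframes

if __name__ == "__main__":
    # Toy example: scores for a 12-frame clip containing one event.
    scores = np.array([0.1, 0.2, 0.1, 0.7, 0.8, 0.9, 0.85, 0.3, 0.2, 0.1, 0.1, 0.1])
    print(select_event_keyframes(scores))  # -> [3, 6]
```

Only these boundary keyframes, rather than the full stream, would then be handed to the successive perception stages, which is what reduces the robots' response latency in the described design.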
