Abstract

Emerging edge computing applications often use high-definition cameras as edge devices to capture video streams that must be analyzed in real time for situational understanding and query answering. However, such devices suffer from limited energy (and hence limited computing power) and limited bandwidth for streaming data to the edge controllers, which provide much higher computing capacity. In this paper, we address these issues in the context of vehicular traffic monitoring and develop a scheme with two components: YLLO and BATS. YLLO is a lightweight object recognition algorithm that runs on the edge device itself and substantially reduces the frame rate sent to the edge controller without discarding important information. BATS adapts transmissions to the available bandwidth by exploiting further redundancy in the video stream in both single- and multi-camera scenarios. We show that together these mechanisms maintain object identification accuracy above 95 percent while transmitting only ~5–10 percent of all frames recorded by the cameras.
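
The abstract does not specify YLLO's filtering criterion, but the general idea of suppressing redundant frames on the edge device can be illustrated with a minimal sketch. The `detect_objects` callback and the change test below are hypothetical placeholders, not the paper's method:

```python
from typing import Any, Callable, Iterable, List, Set

def filter_redundant_frames(
    frames: Iterable[Any],
    detect_objects: Callable[[Any], Set[str]],
) -> List[Any]:
    """Transmit a frame only when the set of detected object classes
    differs from the last transmitted frame; drop the rest as redundant.
    (Illustrative stand-in for on-device frame-rate reduction.)"""
    kept: List[Any] = []
    last_labels: Set[str] = set()
    for frame in frames:
        labels = detect_objects(frame)   # lightweight on-device detector (assumed)
        if labels != last_labels:        # scene content changed -> send to controller
            kept.append(frame)
            last_labels = labels
    return kept

# Toy usage: frames represented directly by their visible object classes.
frames = [{"car"}, {"car"}, {"car", "truck"}, {"car", "truck"}, {"car"}]
print(filter_redundant_frames(frames, detect_objects=lambda f: f))
# -> 3 of 5 frames transmitted
```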
