Abstract

Advances in communication technologies and in the computational capabilities of Internet of Things (IoT) devices enable a range of complex applications that require ever-increasing processing of sensor data. An illustrative example is real-time video surveillance, which captures videos of target scenes and processes them to detect anomalies using deep learning (DL). Running deep learning models requires substantial processing and incurs high computation delay and energy consumption on resource-constrained IoT devices. In this article, we introduce methods for distributed inference across IoT devices and an edge server. Two distinct algorithms are proposed to split the computation of deep neural network layers between an IoT device and an edge server: the early split strategy (ESS) for battery-powered IoT devices and the late split strategy (LSS) for IoT devices connected to a regular power source. The evaluation shows that both the ESS and LSS schemes achieve the target inference delay deadline when tested on the VGG16 and MobileNet_V2 CNN models. In terms of computational load, the ESS scheme achieves nearly a 15–20% reduction, whereas the LSS scheme achieves up to a 60% reduction. The gains in energy saving of IoT devices for the ESS and LSS schemes are nearly 18% and 52%, respectively.
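The split-computation idea summarized above can be sketched as follows. This is a hypothetical illustration, not the paper's actual ESS/LSS algorithms: the cost model, function names, and parameters (`layer_costs`, `transfer_sizes`, `device_speed`, `edge_speed`, `bandwidth`) are all assumptions. The device runs layers before the split point locally, transmits the intermediate activation, and the edge server runs the rest; an early split offloads more work (less on-device computation, as in ESS), while a late split keeps more computation local (as in LSS).

```python
def total_delay(layer_costs, split, device_speed, edge_speed,
                transfer_sizes, bandwidth):
    """Estimate end-to-end inference delay for a given split point.

    layer_costs[i]    -- compute cost of layer i (e.g., FLOPs; illustrative)
    transfer_sizes[i] -- size of the activation produced by layer i (bytes)
    split             -- index of the first layer executed on the edge server
                         (1 <= split <= len(layer_costs))
    """
    device_time = sum(layer_costs[:split]) / device_speed
    edge_time = sum(layer_costs[split:]) / edge_speed
    # The activation of the last on-device layer is sent over the network.
    tx_time = transfer_sizes[split - 1] / bandwidth
    return device_time + tx_time + edge_time


def choose_split(layer_costs, device_speed, edge_speed, transfer_sizes,
                 bandwidth, deadline, prefer_early=True):
    """Pick a split point whose estimated delay meets the deadline.

    prefer_early=True  -> earliest feasible split (ESS-like: minimal
                          on-device load, suited to battery-powered devices)
    prefer_early=False -> latest feasible split (LSS-like: more local
                          computation, suited to mains-powered devices)
    Returns None if no split point meets the deadline.
    """
    feasible = [s for s in range(1, len(layer_costs) + 1)
                if total_delay(layer_costs, s, device_speed, edge_speed,
                               transfer_sizes, bandwidth) <= deadline]
    if not feasible:
        return None
    return min(feasible) if prefer_early else max(feasible)
```

For example, with three layers of equal cost, a fast edge server, and shrinking activation sizes, `prefer_early=True` returns the first feasible split while `prefer_early=False` returns the last, mirroring how ESS and LSS trade on-device load against network transfer under the same delay deadline.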
