Abstract

Limited network bandwidth and high latency are major bottlenecks of cloud computing. To address them, the edge computing paradigm has emerged. Edge computing shifts computation from the centralized cloud closer to the devices that generate data, reducing response time and latency and improving battery life while preserving data safety and privacy. Because its architecture is distributed, resource management is a central concern in edge computing: in such a resource-constrained environment, edge devices must be able to process all types of requests coming from IoT devices. With advances in data science and machine learning across many domains, numerous intelligent services have emerged to improve user experience. The deep learning models behind these services are updated frequently to adapt to new hardware and software requirements, carry installation dependencies, and require cross-platform compatibility for both training and prediction. In a home edge environment these issues are amplified by constraints on memory and computing power. Moreover, given the extensive list of deep learning models available for different services, it is difficult to maintain an environment that supports all of them across the various devices in a smart home. To solve this problem in the smart home environment, this paper proposes an architecture that uses containerization techniques to deploy and manage deep learning models, and explains the steps to convert an existing model into a container. The prime benefits of the proposed architecture are a minimal space requirement on the edge device, data privacy, low latency, and device independence for deep learning models. To evaluate its performance, a deep learning model was containerized and compared with the same model deployed directly in the same edge environment.
The experimental results demonstrate that the containerized architecture performs almost identically in terms of model execution time and CPU load versus execution time, while adding ease of model deployment and cross-platform model execution.
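The model-to-container conversion the abstract refers to could be sketched as a minimal Dockerfile. Everything here is illustrative: the base image, the `model.onnx` file, and the `serve.py` prediction script are assumptions for the sketch, not the paper's actual artifacts.

```dockerfile
# Illustrative sketch of packaging a trained deep learning model for an
# edge device; base image, model file, and serve.py are hypothetical.
FROM python:3.10-slim

WORKDIR /app

# Pin inference dependencies so the container behaves identically on any
# edge host, regardless of its local software stack
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Bundle the trained model together with a small prediction service
COPY model.onnx serve.py ./

EXPOSE 8080
CMD ["python", "serve.py"]
```

Built once (e.g. `docker build -t edge-model .`), such an image can be run on any container-capable device in the home (`docker run -p 8080:8080 edge-model`), which is one way to obtain the device independence and deployment ease the abstract claims.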
