Abstract

This paper starts from the observation that mobile and edge devices are powerful enough to execute Machine Learning (ML) application components, which in turn creates opportunities to keep privacy-sensitive data close to its source. Composing and deploying a distributed ML application is far from trivial because the optimal configuration depends on the application’s goals and execution context, both of which may change throughout its lifetime. Prior research on context-aware reconfiguration of ML-based applications offers limited capabilities for dynamically migrating software components between mobile, edge, and cloud devices. In this paper, we propose a context-aware middleware that automatically optimizes the application deployment to satisfy the application’s functional goals as the execution context changes in terms of available computation, memory, and network resources. We use finite state machines to model the reconfiguration of the application based on contextual triggers and to facilitate system design through the abstraction of system states. We illustrate the benefits of our approach with an image recognition application with well-defined performance goals that is deployed in a three-tier mobile-edge-cloud architecture.
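To give a concrete flavour of the finite-state-machine idea mentioned in the abstract, the sketch below shows a minimal, hypothetical reconfiguration FSM in Python. The state names (MOBILE, EDGE, CLOUD) and the contextual triggers are illustrative assumptions only; they are not the paper's actual state set, triggers, or implementation.

```python
from enum import Enum, auto

class Deployment(Enum):
    # Hypothetical deployment states for illustration only.
    MOBILE = auto()   # inference runs entirely on the mobile device
    EDGE = auto()     # components migrated to a nearby edge node
    CLOUD = auto()    # full pipeline offloaded to the cloud

# Transition table: (current state, contextual trigger) -> next state.
# Trigger names are assumed for the sake of the example.
TRANSITIONS = {
    (Deployment.MOBILE, "low_memory"):          Deployment.EDGE,
    (Deployment.EDGE,   "edge_overloaded"):     Deployment.CLOUD,
    (Deployment.CLOUD,  "network_degraded"):    Deployment.EDGE,
    (Deployment.EDGE,   "resources_recovered"): Deployment.MOBILE,
}

class ReconfigurationFSM:
    """Minimal finite state machine driving deployment reconfiguration."""

    def __init__(self, initial: Deployment = Deployment.MOBILE):
        self.state = initial

    def on_context_change(self, trigger: str) -> Deployment:
        # Apply the transition if one is defined for (state, trigger);
        # otherwise keep the current deployment.
        self.state = TRANSITIONS.get((self.state, trigger), self.state)
        return self.state

# Usage: contextual triggers from resource monitoring drive the FSM.
fsm = ReconfigurationFSM()
print(fsm.on_context_change("low_memory"))       # Deployment.EDGE
print(fsm.on_context_change("edge_overloaded"))  # Deployment.CLOUD
```

In this reading, the abstraction of system states lets a designer reason about valid deployments and their transitions separately from the monitoring logic that emits the contextual triggers.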
