Abstract

Context-aware middlewares support applications with context management. Current middlewares support both hardware and software sensors providing data in structured form (e.g., temperature, wind, and smoke sensors). Nevertheless, recent advances in machine learning have paved the way for acquiring context from information-rich, loosely structured data such as audio or video signals. This paper describes a framework (CAMeL) that enriches context-aware middlewares with machine learning capabilities. The framework focuses on acquiring contextual information from sensors providing loosely structured data, without requiring developers to implement dedicated application code or to rely on external libraries. Since the general goal of context-aware middlewares is to make applications more dynamic and adaptive, the proposed framework itself can be programmed to dynamically select sensors and machine learning algorithms on a contextual basis. We show with experiments and case studies how the CAMeL framework can (i) promote code reuse and reduce the complexity of context-aware applications by natively supporting machine learning capabilities and (ii) self-adapt using the acquired context, improving classification accuracy while reducing energy consumption on mobile platforms.

Highlights

  • Introduction: The rapid spread of Ubiquitous Computing and the Internet of Things (IoT) technologies is generating a sharp increase in the availability of data somehow representing our living environments [1]. The increasing amount of available data, referred to as contextual data, is driving the development of applications capable of adapting their behaviour according to a representation of the environment. In 2001, Dey defined context as any information that can be used to characterise the situation of an entity

  • In this paper we present a framework devoted to context acquisition that explicitly supports machine learning techniques (source code can be downloaded at https://bitbucket.org/damiano_fontana/awareness) and describe how it can be integrated with off-the-shelf context-aware middlewares. The framework is devoted to transforming data streams into structured data and has been developed to be integrated via a compact set of interfaces (see the sketch after this list)

  • Due to the limitations arising in both scenarios, we propose a modular, reconfigurable framework allowing machine learning to be fully integrated within existing middlewares (Figure 1(c))
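As a minimal sketch, the listing below illustrates what such a compact set of integration interfaces could look like. All names and signatures (ContextSensor, SensorListener, ContextClassifier, ContextListener) are assumptions made for this illustration, not the actual CAMeL API; the real interfaces are available in the repository linked above.

    // Hypothetical sketch of integration interfaces between a context-aware
    // middleware and a machine-learning-based context acquisition framework.
    // These names are assumptions for illustration, not the CAMeL API.

    /** A source of loosely structured data (e.g., audio or video frames). */
    interface ContextSensor<T> {
        void start();
        void stop();
        void addListener(SensorListener<T> listener);
    }

    /** Receives raw samples pushed by a sensor. */
    interface SensorListener<T> {
        void onSample(T sample);
    }

    /** Turns raw samples into a symbolic context label via a machine learning model. */
    interface ContextClassifier<T> {
        String classify(T sample);
    }

    /** Callback the hosting middleware implements to consume structured context. */
    interface ContextListener {
        void onContext(String contextLabel);
    }

In this reading, a middleware would register a ContextListener with the framework, while the framework internally wires sensors to classifiers and remains free to change that wiring over time.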


Summary

Motivations

Connected objects such as smartphones or smart cameras, equipped with increasing computational, connectivity, and sensing capabilities, are rapidly being deployed around us. That is, applications increasingly need to collect data streams using the available sensors and transform them into structured information (i.e., context), enabling adaptation. Context-aware middleware still does not explicitly support the acquisition of contextual information from unstructured data streams; to reach this goal, applications must rely either on third-party machine learning libraries/modules or on original code written from scratch.

Data traverse the whole architecture by means of in-memory queues, enabling decoupling and many-to-many asynchronous communication. On top of these three layers, middlewares provide context management (i.e., modelling, reasoning, and distribution) to applications. The dynamic selection of components allows developers to define the most suitable modules to be used in each specific context; it is not feasible to deal with a large number of scenarios with a monolithic, static architecture. The following section details how internal reconfiguration can be programmed.
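As a rough illustration of the queue-based decoupling and dynamic component selection described above, the sketch below wires a sensing stage to a classification stage through an in-memory queue, so that producers and consumers run asynchronously and the active classifier can be swapped at runtime. The class and method names are hypothetical and are not taken from the CAMeL code base.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.atomic.AtomicReference;

    /** Hypothetical illustration of queue-based decoupling between pipeline stages. */
    public class PipelineSketch {

        /** Pluggable classification stage; implementations can be selected per context. */
        interface Classifier {
            String classify(float[] features);
        }

        public static void main(String[] args) {
            // In-memory queue decoupling the sensing stage from the classification stage.
            BlockingQueue<float[]> queue = new ArrayBlockingQueue<>(128);

            // The active classifier can be replaced at runtime (context-based reconfiguration).
            AtomicReference<Classifier> active = new AtomicReference<>(f -> "unknown");

            // Producer: pushes (fake) feature vectors extracted from a data stream.
            Thread producer = new Thread(() -> {
                try {
                    for (int i = 0; i < 10; i++) {
                        queue.put(new float[] {i, i * 0.5f});
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            // Consumer: drains the queue and applies whichever classifier is active.
            Thread consumer = new Thread(() -> {
                try {
                    for (int i = 0; i < 10; i++) {
                        float[] sample = queue.take();
                        System.out.println(active.get().classify(sample));
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            producer.start();
            consumer.start();

            // Example of dynamic selection: switch the classifier while the pipeline runs.
            active.set(f -> f[0] > 5 ? "high-activity" : "low-activity");
        }
    }

The queue lets any number of producers feed any number of consumers without direct references between them, which is what makes per-context replacement of individual components possible.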

Context-Based Reconfiguration
Implementation Insights
Case Studies
Findings
A Case Study for Experimental Evaluation
Conclusion
