Abstract

Purpose

The Reflective Middleware for Acoustic Management (ReM-AM), based on the Middleware for Cloud Learning Environments (AmICL), aims to improve the interaction between users and agents in a Smart Environment (SE) through acoustic services, in order to handle the unpredictable situations that arise from sounds and vibrations. The middleware allows observing, analyzing, modifying and acting on every state of an SE from its acoustics.

Design/methodology/approach

This work details an extension of ReM-AM using the ontology-driven architecture (ODA) paradigm for acoustic management. The paper defines the different domains of knowledge required for managing sound in SEs, which are modeled using ontologies.

Findings

This work proposes an acoustics and sound ontology, a service-oriented architecture (SOA) ontology, and a data analytics and autonomic computing ontology, which work together. Finally, the paper presents three case studies in the contexts of the smart workplace (SWP), ambient-assisted living (AAL) and smart cities (SC).

Research limitations/implications

Future work will develop algorithms for the classification and analysis of sound events, to support emotion recognition not only from speech but also from random, isolated sound events. Further work will define the implementation requirements and the real-context modeling requirements for building a working prototype.

Practical implications

The case studies show the flexibility that the ODA-based ReM-AM middleware gains by being aware of different contexts and acquiring information from each of them, using this information to adapt itself to the environment and improve it through its autonomic cycles. To achieve this, the middleware integrates the classes and relations of its ontologies naturally into the autonomic cycles.

Originality/value

The main contribution of this work is the description of the ontologies required for future work on acoustic management in SEs: prior studies have used ontologies for sound event recognition, but have not extended them as a knowledge source in an SE middleware. Specifically, this paper presents the theoretical framework of this work, composed of the AmICL middleware, the ReM-AM middleware and the ODA paradigm.
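
The paper stays at the conceptual level; as a rough illustration of how the three proposed ontologies and the autonomic cycles could fit together, the following minimal Python sketch models stand-ins for their core concepts and runs one MAPE-K-style pass (the standard monitor-analyze-plan-execute loop of autonomic computing) over them. All names and values here (SoundEvent, AcousticService, KnowledgeBase, the 70 dB rule) are illustrative assumptions, not the authors' actual ontology classes.

# Hypothetical sketch: minimal stand-ins for the three ontologies'
# core concepts, plus a MAPE-K-style autonomic cycle that consumes them.
# All names are illustrative assumptions, not the paper's actual model.
from dataclasses import dataclass, field

@dataclass
class SoundEvent:
    """Acoustics and sound ontology: an observed acoustic event."""
    source: str          # e.g. "speech", "alarm", "machinery"
    db_level: float      # sound pressure level in dB
    location: str

@dataclass
class AcousticService:
    """SOA ontology: a service the middleware can invoke."""
    name: str
    handles: str         # kind of sound source the service targets

@dataclass
class KnowledgeBase:
    """Data analytics / autonomic computing ontology: shared knowledge."""
    events: list = field(default_factory=list)
    services: list = field(default_factory=list)

    def matching_service(self, event):
        return next((s for s in self.services if s.handles == event.source), None)

def autonomic_cycle(kb: KnowledgeBase, event: SoundEvent):
    """One MAPE-K pass: Monitor, Analyze, Plan, Execute over the knowledge base."""
    kb.events.append(event)                              # Monitor: record the observation
    noisy = event.db_level > 70.0                        # Analyze: simple threshold rule
    plan = kb.matching_service(event) if noisy else None # Plan: pick a matching service
    if plan:                                             # Execute: invoke the chosen service
        print(f"Invoking {plan.name} for {event.source} at {event.location}")

kb = KnowledgeBase(services=[AcousticService("noise_masking", handles="machinery")])
autonomic_cycle(kb, SoundEvent("machinery", 82.5, "workshop"))

In the actual middleware, the Analyze and Plan steps would query the classes and relations of the ontologies rather than a hard-coded threshold rule.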

Highlights

  • In artificial intelligence, the main process related to sound management is linked to voice recognition and voice commands

  • This work carries out an extension of the Reflective Middleware for Acoustic Management (ReM-AM) based on the ontology-driven architecture (ODA) paradigm for acoustic management in Smart Environments (SEs)

  • The acoustic management and the ReM-AM middleware deployment are integrated by ontologies using the computation-independent model (CIM) and platform-independent model (PIM) layers of the ODA paradigm, as sketched after this list
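
As a rough illustration of the CIM/PIM split named in the last highlight, the sketch below keeps a computation-independent domain concept separate from a platform-independent service contract; in model-driven architecture terms, a platform-specific model (PSM) would later bind the contract to concrete devices. All names are hypothetical, chosen only for illustration, and not taken from the paper.

# Hypothetical sketch of the ODA layering mentioned above.
# CIM layer: computation-independent domain vocabulary (ontology terms).
# PIM layer: platform-independent service contract over that vocabulary.
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class AcousticContext:
    """CIM: a domain concept with no computational detail."""
    environment: str     # e.g. "smart workplace", "AAL home"
    ambient_db: float

class AcousticManager(ABC):
    """PIM: a platform-independent contract any deployment must realize."""

    @abstractmethod
    def adapt(self, context: AcousticContext) -> str:
        """Return the adaptation decision for the given context."""

class ThresholdManager(AcousticManager):
    """One possible realization; a PSM would bind this to real devices."""

    def adapt(self, context: AcousticContext) -> str:
        if context.ambient_db > 65.0:
            return f"attenuate noise in {context.environment}"
        return "no action"

print(ThresholdManager().adapt(AcousticContext("smart workplace", 71.0)))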


