Abstract

A smart home system is realized by implementing various services. However, designing and deploying smart home services is challenging because of their complexity and the large number of connected objects involved. Existing approaches to creating services in a smart home system either require complex input from the inhabitant or only work if the inhabitant specifies regulation solutions rather than targets. In addition, smart home services may conflict when they access the same actuators. Learning methods that dynamically generate smart home services are a promising way to address these problems. In this paper, we propose several reinforcement learning-based architectures that allow a smart home system to dynamically generate services; the architectures differ in their ability to account for the composition of services and their mutual influence. The expected advantages are, first, that the services can propose actuator states from the target values of the controllable environment states, which the inhabitant provides either directly or through simple, natural interaction; and second, that these propositions never conflict. We compare the proposed architectures in several simulated smart home environments with different services and select those that perform best with respect to our predefined metrics.
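To illustrate the general idea of mapping an inhabitant's target environment state to actuator propositions via reinforcement learning, the following is a minimal, hypothetical sketch (not the paper's architecture): a toy room whose temperature is the controllable environment state, a heater level as the single actuator, and tabular Q-learning as the learning method. All names, dynamics, and parameters here are illustrative assumptions.

```python
import random
from collections import defaultdict

# Toy setup (illustrative only): one controllable environment state
# (discretised room temperature) and one actuator (heater level 0-2).
# The agent learns to propose actuator states that drive the temperature
# towards the inhabitant's target value.
TEMPS = list(range(15, 26))   # discretised temperatures (deg C)
ACTIONS = [0, 1, 2]           # assumed heater levels (actuator states)
TARGET = 21                   # target value given by the inhabitant

def step(temp, action):
    """Very rough room dynamics: the heater warms, an idle room cools."""
    drift = {0: -1, 1: 0, 2: +1}[action]
    next_temp = min(max(temp + drift, TEMPS[0]), TEMPS[-1])
    reward = -abs(next_temp - TARGET)   # closer to the target is better
    return next_temp, reward

q = defaultdict(float)                  # Q[(temperature, action)]
alpha, gamma, epsilon = 0.1, 0.9, 0.1   # assumed hyperparameters

for episode in range(2000):
    temp = random.choice(TEMPS)
    for _ in range(30):
        # epsilon-greedy action selection
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(temp, a)])
        next_temp, reward = step(temp, action)
        best_next = max(q[(next_temp, a)] for a in ACTIONS)
        q[(temp, action)] += alpha * (reward + gamma * best_next - q[(temp, action)])
        temp = next_temp

# The learned greedy policy plays the role of a generated service:
# for each observed temperature it proposes an actuator state.
policy = {t: max(ACTIONS, key=lambda a: q[(t, a)]) for t in TEMPS}
print(policy)
```

Because a single policy outputs one actuator proposition per state, this framing also hints at how conflicts between services sharing an actuator can be avoided by design, as the abstract claims.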
