Abstract

Autonomous product service systems (PSS) deliver value by controlling products autonomously, which requires powerful intelligence embedded in the products. Such intelligence can be developed through reinforcement learning in simulation, where products act as agents that independently learn optimal behavior through trial and error. These simulations can address different system levels (e.g. a car as an agent in a city, a microcontroller as an agent in a car) and can vary in their degree of visualization (e.g. 3D, 2D, block diagrams, no visualization). In every case, however, they involve an agent that chooses from a set of possible actions, each of which changes the environment; these changes are evaluated against an objective encoded in a reward function that engineers must define in advance. During training, a neural network gradually learns an optimal behavioral model through a reinforcement learning algorithm that uses data on the actions selected by the agent, their outcomes, and the rewards obtained. The models required for the agent, the environment, the artificial intelligence (e.g. deep neural networks), the reinforcement learning algorithm, and the reward function are, however, highly interdependent, and their design can strongly influence the training result. When training results are poor, rapid adaptation of these models is therefore desirable to improve the training process, but it is difficult to implement because of these interdependencies. In this paper, we discuss the challenges of model building for reinforcement learning in simulation and propose a general approach.
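To make the interplay of the models named in the abstract concrete, the following is a minimal sketch (not taken from the paper) of a reinforcement learning training loop in which the environment model, the reward function, the behavioral model, and the learning algorithm are defined as separate but interdependent pieces. The toy corridor environment, the tabular Q-learning update, and all names are illustrative assumptions; the Q-table merely stands in for the deep neural network discussed in the paper.

```python
import random

# --- Environment model (assumed toy example): a 1-D corridor.
# The agent starts at cell 0 and the goal is the last cell.
N_STATES = 10
ACTIONS = [-1, +1]          # move left / move right

def step(state, action):
    """Apply an action to the environment and return the resulting state."""
    return max(0, min(N_STATES - 1, state + action))

# --- Reward function: defined by the engineer before training.
# Here: +1 for reaching the goal, a small cost for every other step.
def reward(state, action, next_state):
    return 1.0 if next_state == N_STATES - 1 else -0.01

# --- Behavioral model: a Q-table stands in for the neural network.
q_table = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

# --- Reinforcement learning algorithm: tabular Q-learning
# with an epsilon-greedy agent (hyperparameters are arbitrary).
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

def choose_action(state):
    if random.random() < EPSILON:
        return random.choice(ACTIONS)                       # explore
    return max(ACTIONS, key=lambda a: q_table[(state, a)])  # exploit

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        action = choose_action(state)           # agent selects an action
        next_state = step(state, action)        # environment changes
        r = reward(state, action, next_state)   # outcome evaluated via the reward function
        # Learning update: uses the selected action, its outcome, and the yielded reward.
        best_next = max(q_table[(next_state, a)] for a in ACTIONS)
        q_table[(state, action)] += ALPHA * (r + GAMMA * best_next - q_table[(state, action)])
        state = next_state

print("Learned policy:", [max(ACTIONS, key=lambda a: q_table[(s, a)]) for s in range(N_STATES)])
```

Even in this reduced form, changing one component (e.g. the reward function or the action set) typically forces changes in the others, which is the interdependency problem the paper addresses.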
