Abstract

The market for domestic robots, designed to perform household chores, is growing as these robots relieve people of everyday responsibilities. Domestic robots are generally welcomed for easing human labour, in contrast to industrial robots, which are frequently criticised for displacing human workers. Before these robots can carry out domestic chores, however, they must first master a number of underlying skills, such as recognising their surroundings, making decisions, and learning from human behaviour. Reinforcement Learning (RL) has emerged as a key robotics technique that enables robots to interact with their environment and learn to optimise their actions so as to maximise rewards. Deep Reinforcement Learning (DeepRL) combines RL with Neural Networks (NNs) to address the more complex, continuous state-action spaces of real-world settings. The efficacy of DeepRL can be further augmented through interactive feedback, in which a trainer offers real-time guidance to expedite the robot's learning process. Nevertheless, current methods apply such guidance only transiently, so learning must be repeated under identical conditions. We therefore present a novel method that preserves and reuses information and advice in Deep Interactive Reinforcement Learning (DeepIRL) via a persistent rule-based system. This method not only expedites the training process but also reduces the number of repetitions trainers must carry out. This study has the potential to advance the development of household robots and improve their effectiveness and efficiency as learners.
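The idea of persisting a trainer's advice as reusable rules can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: the tabular Q-learning agent, the state and action names, and the trainer callback are all assumptions introduced for clarity. Advice given once for a state is stored as a rule and replayed on later visits, so the trainer is not consulted again for the same situation.

```python
import random

class PersistentAdviceAgent:
    """Tabular Q-learning agent that stores trainer advice as persistent
    state -> action rules, reusing them instead of re-asking the trainer.
    (Illustrative sketch; names and hyperparameters are assumptions.)"""

    def __init__(self, actions, epsilon=0.2, alpha=0.5, gamma=0.9):
        self.actions = actions
        self.q = {}              # (state, action) -> estimated value
        self.advice = {}         # state -> advised action (persistent rule base)
        self.epsilon, self.alpha, self.gamma = epsilon, alpha, gamma
        self.trainer_calls = 0   # counts how often the trainer was consulted

    def act(self, state, trainer=None):
        # 1. Reuse a stored rule first: no repeated trainer intervention.
        if state in self.advice:
            return self.advice[state]
        # 2. Otherwise, ask the trainer (if present) and persist the answer.
        if trainer is not None:
            advised = trainer(state)
            if advised is not None:
                self.trainer_calls += 1
                self.advice[state] = advised   # persist the rule for reuse
                return advised
        # 3. Fall back to epsilon-greedy action selection.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q.get((state, a), 0.0))

    def update(self, state, action, reward, next_state):
        # Standard Q-learning update rule.
        best_next = max(self.q.get((next_state, a), 0.0) for a in self.actions)
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (
            reward + self.gamma * best_next - old)

# Usage: the trainer advises "right" in state 0; on the second visit the
# stored rule is replayed, so trainer_calls stays at 1.
agent = PersistentAdviceAgent(["left", "right"])
trainer = lambda s: "right" if s == 0 else None
agent.act(0, trainer)   # consults trainer, stores the rule
agent.act(0, trainer)   # reuses the stored rule
```

The key design point is that the rule base outlives a single episode, which is what distinguishes this scheme from interactive RL methods where advice is applied once and discarded.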
