Reinforcement learning (RL) addresses complex sequential decision-making problems through interactive trial and error and the handling of delayed rewards. However, RL typically starts from scratch and requires extensive exploration, resulting in low learning efficiency. Humans, in contrast, often leverage prior knowledge when learning. Inspired by this, this paper proposes a semantic-knowledge-guided reinforcement learning method (KFDQN) that fully exploits knowledge to improve learning efficiency, training stability, and performance. For knowledge representation, given the strong fuzziness of semantic knowledge, a fuzzy system is constructed to represent it. For knowledge integration, a knowledge-guided framework combining a hybrid action selection strategy (HYAS), a hybrid learning method (HYL), and knowledge updating is built on top of the existing RL framework. HYAS incorporates knowledge into action selection, reducing the randomness of conventional exploration; HYL incorporates knowledge into the learning target, reducing uncertainty in the learning objective; knowledge updating uses new data to refine the knowledge, preventing its limitations from degrading the learning process. The algorithm is validated on numerical tasks in OpenAI Gym and on real-world mobile-robot goal-reach and obstacle-avoidance tasks. The results confirm that the algorithm effectively combines knowledge and reinforcement learning, yielding a 28.6% improvement in learning efficiency, a 19.56% improvement in performance, and increased training stability.
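The abstract does not give the details of HYAS, but its core idea (blending a knowledge-suggested action into an otherwise epsilon-greedy policy) can be sketched as follows. All names here (`knowledge_action`, the blending probability `p_k`, the toy steering rule) are illustrative assumptions, not the paper's actual formulation.

```python
import random

def knowledge_action(state):
    # Toy stand-in for a fuzzy-rule suggestion: steer toward the goal at x = 0.
    # (Hypothetical rule; the paper uses a fuzzy system to produce this suggestion.)
    return 0 if state > 0 else 1  # 0 = move left, 1 = move right

def epsilon_greedy(q_values, epsilon, rng):
    # Standard epsilon-greedy over learned Q-values.
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=q_values.__getitem__)

def hybrid_action(state, q_values, p_k, epsilon, rng):
    """Hybrid selection: with probability p_k follow the knowledge-suggested
    action, otherwise fall back to epsilon-greedy exploration."""
    if rng.random() < p_k:
        return knowledge_action(state)
    return epsilon_greedy(q_values, epsilon, rng)

rng = random.Random(0)
# With p_k = 1.0 the knowledge rule always fires: state > 0 -> action 0.
print(hybrid_action(state=2.0, q_values=[0.1, 0.5], p_k=1.0, epsilon=0.1, rng=rng))
# With p_k = 0.0 and epsilon = 0.0, the greedy Q-value action is chosen.
print(hybrid_action(state=2.0, q_values=[0.1, 0.5], p_k=0.0, epsilon=0.0, rng=rng))
```

In such a scheme, `p_k` would typically be annealed as the learned Q-values improve, so knowledge guides early exploration without constraining the converged policy.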