Abstract

The real world is essentially an indefinite environment in which the probability space, i.e., what can happen, cannot be specified in advance. Conventional reinforcement learning models that learn under uncertain conditions are given the state space as prior knowledge. Here, we developed a reinforcement learning model with a dynamic state space and tested it on a two-target search task previously used for monkeys. In the task, two out of four neighboring spots were alternately correct, and the valid pair was switched after consecutive correct trials in the exploitation phase. The agent was required to find a new pair during the exploration phase, but it could not obtain the maximum reward by referring only to the single previous trial; it needed to select an action based on the two previous trials. To adapt to this task structure without prior knowledge, the model expanded its state space so that it referred to more than one trial as the previous state, based on two explicit criteria for the appropriateness of state expansion: experience saturation and decision uniqueness of action selection. The model not only performed comparably to the ideal model given prior knowledge of the task structure, but also performed well on a task that was not envisioned when the models were developed. Moreover, it learned how to search rationally without falling into the exploration–exploitation trade-off. For constructing a learning model that can adapt to an indefinite environment, the method of expanding the state space based on experience saturation and decision uniqueness of action selection, as used by our model, is promising.
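
As a rough illustration of the expansion rule described above, the following sketch shows how a state might be widened from one previous trial to two when that state has been experienced enough yet still does not yield a unique best action. This is not the authors' implementation: the Q-table layout, the visit-count threshold, and the value-margin test are assumptions introduced only to make the two criteria (experience saturation and decision uniqueness) concrete.

```python
import numpy as np

class DynamicStateAgent:
    """Toy agent whose state may refer to one or two previous trials (illustrative only)."""

    def __init__(self, n_actions=4, visit_threshold=20, uniqueness_margin=0.1):
        self.n_actions = n_actions
        self.visit_threshold = visit_threshold      # assumed form of "experience saturation"
        self.uniqueness_margin = uniqueness_margin  # assumed form of "decision uniqueness"
        self.q = {}       # state (tuple of recent trial outcomes) -> array of action values
        self.visits = {}  # state -> number of times the state has been visited

    def _values(self, state):
        if state not in self.q:
            self.q[state] = np.zeros(self.n_actions)
            self.visits[state] = 0
        return self.q[state]

    def should_expand(self, state):
        """Expand (look one more trial back) only when the state is well
        experienced yet still gives no unique best action."""
        values = self._values(state)
        saturated = self.visits[state] >= self.visit_threshold
        best, second = np.sort(values)[::-1][:2]
        unique = (best - second) > self.uniqueness_margin
        return saturated and not unique

    def select_action(self, history):
        """history: list of (chosen_spot, rewarded) outcomes, most recent last."""
        state = tuple(history[-1:])      # start by referring to the single previous trial
        if self.should_expand(state):
            state = tuple(history[-2:])  # refer to the two previous trials instead
        values = self._values(state)
        self.visits[state] += 1
        return int(np.argmax(values)), state
```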

Highlights

  • The fixed 8×8-state model, with 8 × 8 = 64 states, is the ideal learner for the two-target search task; it quickly learned the current pair and obtained a high correct response rate, with an upper limit close to the theoretical value (a state-encoding sketch follows this list)

  • In the two-target search task, even if the correct answer is obtained by choosing one target, two targets might be correct in that trial

  • We developed a reinforcement learning model with a dynamic state space and tested its ability to execute a two-target search task in which the exploration and exploitation phases alternated
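
The 8 × 8 = 64 states mentioned in the first highlight can be read, under one plausible interpretation, as encoding each of the two previous trials by its chosen spot (four options) and its outcome (correct or incorrect), giving eight possibilities per trial. The encoding below is an illustrative assumption, not taken from the paper.

```python
def outcome_index(spot, correct):
    """Map a single trial outcome (spot 0-3, correct True/False) to an index 0-7."""
    return spot * 2 + int(correct)

def state_index(prev2, prev1):
    """Map the two previous trial outcomes to one of 8 x 8 = 64 fixed states."""
    return outcome_index(*prev2) * 8 + outcome_index(*prev1)

# Example: previous-but-one trial chose spot 2 and was correct,
# previous trial chose spot 0 and was incorrect -> state 40.
assert state_index((2, True), (0, False)) == 40
```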

Introduction

The other is the case where even the probability or state space of the environment is neither given nor hypothesized in advance. An environment with the latter type of uncertainty is defined as an indefinite environment, and adaptation to such an ever-changing indefinite environment is a critical issue for living systems (Shimizu, 1993). Even in the abovementioned advanced POMDP models, possible environmental states are generated within a given probability or feature space (Figures 1A and 1B). These architectures may not achieve high learning performance in any unknown environment.
