Abstract

Homeostasis is a self-regulatory process wherein an organism maintains a specific internal physiological state. Homeostatic reinforcement learning (RL) is a framework recently proposed in computational neuroscience to explain animal behavior. Homeostatic RL organizes the behaviors of autonomous embodied agents according to the demands of the internal dynamics of their bodies, coupled with the external environment. Thus, it provides a basis for real-world autonomous agents, such as robots, to continually acquire and learn integrated behaviors for survival. However, prior studies have generally been limited to small-scale problems, because the agent must handle observations of these coupled internal-external dynamics. To overcome this restriction, we developed a method that scales up homeostatic RL using deep RL. Furthermore, among the several homeostatic reward definitions proposed in the literature, we found that the definition based on the difference in the drive function yields the best results. We created two benchmark environments for homeostasis and performed a behavioral analysis, which showed that the trained agents in each environment adapted their behavior to their internal physiological states. Finally, we extended our method to handle vision using deep convolutional neural networks. Analysis of a trained agent revealed that it acquires visual saliency rooted in the survival environment and internal representations arising from multimodal input.
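
To make the drive-difference reward concrete, here is a minimal sketch following the standard homeostatic RL formulation (after Keramati and Gutkin), where the drive is a distance of the internal state from a setpoint and the reward is the reduction in drive caused by a transition. The exponents, setpoints, and variable names below are illustrative placeholders, not values taken from this paper.

```python
import numpy as np

def drive(h, h_star, m=3.0, n=4.0):
    """Drive function: distance of internal state h from setpoint h_star.

    A common parameterization is d(h) = (sum_i |h*_i - h_i|^n)^(1/m);
    m and n here are placeholder exponents.
    """
    return np.sum(np.abs(h_star - h) ** n) ** (1.0 / m)

def homeostatic_reward(h_t, h_next, h_star):
    """Reward = drive before the transition minus drive after it.

    Positive when the action moves the internal state toward the setpoint.
    """
    return drive(h_t, h_star) - drive(h_next, h_star)

# Example: an agent with two internal variables (e.g., energy and water).
h_star = np.array([1.0, 1.0])    # desired internal setpoint
h_t    = np.array([0.4, 0.9])    # internal state before acting
h_next = np.array([0.6, 0.85])   # internal state after acting (e.g., eating)
print(homeostatic_reward(h_t, h_next, h_star))  # positive: drive decreased
```

Under this definition, reward is generated by the agent's own internal dynamics rather than by an external task signal, which is what allows behaviors such as foraging to emerge from the demand to keep internal variables near their setpoints.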
