Abstract

Modern robotic control tasks are usually solved with reinforcement learning techniques. In this paper we show that deep active inference can be used to build agents for large and complex environments, and we demonstrate on an OpenAI benchmark that the deep active inference approach achieves results comparable to or better than modern reinforcement learning algorithms. Active inference is a framework, based on the Free Energy Principle, for action and planning in an environment by minimizing variational free energy. The idea is that the agent wants to remain alive and reduce uncertainty, which means it should avoid surprising or unpreferred states and observations. Active inference has been proposed as a unifying brain theory, but its existing implementations are unable to handle complex environments. The deep active inference algorithm uses deep neural networks to approximate the key densities, scaling active inference to much larger and more complex environments.
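To make the idea of approximating the key densities with neural networks concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of a variational free energy objective with Gaussian densities: an encoder network plays the role of the approximate posterior over hidden states and a decoder network plays the role of the observation likelihood. All module names, dimensions, and the standard-normal prior are illustrative assumptions.

    # Hypothetical sketch of variational free energy with neural density approximations.
    import torch
    import torch.nn as nn

    class GaussianDensity(nn.Module):
        """Small MLP outputting the mean and log-variance of a diagonal Gaussian."""
        def __init__(self, in_dim, out_dim, hidden=64):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, 2 * out_dim))
        def forward(self, x):
            mu, logvar = self.net(x).chunk(2, dim=-1)
            return mu, logvar

    obs_dim, state_dim = 8, 4
    encoder = GaussianDensity(obs_dim, state_dim)   # q(s | o), approximate posterior
    decoder = GaussianDensity(state_dim, obs_dim)   # p(o | s), likelihood

    def free_energy(obs):
        """F = KL(q(s|o) || p(s)) - E_q[log p(o|s)], with p(s) = N(0, I) assumed."""
        mu_s, logvar_s = encoder(obs)
        # Reparameterized sample from the approximate posterior q(s | o).
        s = mu_s + torch.randn_like(mu_s) * torch.exp(0.5 * logvar_s)
        mu_o, logvar_o = decoder(s)
        # Gaussian negative log-likelihood of the observation (accuracy term; constants omitted).
        nll = 0.5 * (((obs - mu_o) ** 2) / logvar_o.exp() + logvar_o).sum(-1)
        # KL divergence between q(s | o) and the standard-normal prior (complexity term).
        kl = 0.5 * (mu_s ** 2 + logvar_s.exp() - logvar_s - 1).sum(-1)
        return (nll + kl).mean()

    loss = free_energy(torch.randn(16, obs_dim))
    loss.backward()  # gradients train both density networks to minimize free energy

Minimizing this quantity drives the agent toward states that explain its observations well (low surprise) while keeping the posterior close to the prior (low complexity), which is the mechanism the abstract refers to.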
