Abstract

Although still not fully understood, sleep is known to play an important role in learning and in pruning synaptic connections. From the active inference perspective, these processes can be cast as learning the parameters of a generative model and as Bayesian model reduction, respectively. In this article, we show how a similar process can reduce the dimensionality of the latent space of such a generative model, and hence its complexity, during training in deep active inference. While deep active inference uses deep neural networks to construct the state space, one issue remains: the dimensionality of the latent space must be specified beforehand. We investigate two methods that prune the latent space of deep active inference models. The first approach functions similarly to sleep and performs model reduction post hoc. The second approach is a novel method that is more akin to reflection: it operates during training and displays “aha” moments when the model is able to reduce the dimensionality of the latent space. We show for two well-known simulated environments that model performance is retained with the first approach and diminishes only slightly with the second. We also show that reconstructions from a real-world example are indistinguishable before and after reduction. We conclude that the most important difference between the two approaches is a trade-off between training time and model performance, in terms of accuracy and the ability to generalize, achieved by minimizing model complexity.
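
The paper's exact reduction criterion is not reproduced here, but the general idea can be illustrated with a small sketch. Assuming a VAE-style latent state model with a standard-normal prior, dimensions whose per-dimension KL divergence stays near zero for all inputs carry no information and are candidates for pruning, either post hoc (the sleep-like route) or on-line during training. The function names, the threshold, and the diagnostic itself are our assumptions for illustration, not the authors' method.

    # Illustrative sketch (not the paper's exact criterion): identify latent
    # dimensions of a VAE-style state model that carry almost no information,
    # i.e. whose average KL to the standard-normal prior is near zero, and
    # mark them as candidates for pruning. Names and threshold are assumptions.
    import numpy as np

    def kl_per_dimension(mu, log_var):
        """Per-dimension KL( N(mu, sigma^2) || N(0, 1) ), averaged over a batch.

        mu, log_var: arrays of shape (batch, latent_dim) produced by the encoder.
        """
        kl = 0.5 * (mu ** 2 + np.exp(log_var) - log_var - 1.0)
        return kl.mean(axis=0)                      # shape: (latent_dim,)

    def prunable_dimensions(mu, log_var, threshold=1e-2):
        """Indices of latent dimensions whose average KL falls below `threshold`.

        Such dimensions stay close to the prior for every input and can be
        removed post hoc (sleep-like reduction) or masked out during training
        (on-line reduction).
        """
        return np.where(kl_per_dimension(mu, log_var) < threshold)[0]

    # Toy usage: dimensions 3 and 7 are forced to be uninformative.
    rng = np.random.default_rng(0)
    mu = rng.normal(size=(256, 8));  mu[:, [3, 7]] = 0.0
    log_var = rng.normal(scale=0.1, size=(256, 8));  log_var[:, [3, 7]] = 0.0
    print(prunable_dimensions(mu, log_var))         # -> [3 7]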

Highlights

  • While the role of sleep in animals still contains a lot of mystery (Mignot, 2008; Joiner, 2016), it has been linked to many phenomena, such as restorative processes in the brain (Hobson, 2005) and memory processing (Born and Wilhelm, 2012; Potkin and Bunney, 2012; Stickgold and Walker, 2013)

  • In response to the problem that the model must be retrained after off-line, sleep-like reduction, we present an on-line method that optimizes the number of latent dimensions as part of the training process

Introduction

While the role of sleep in animals still contains a lot of mystery (Mignot, 2008; Joiner, 2016), it has been linked to many phenomena, such as restorative processes in the brain (Hobson, 2005) and memory processing (Born and Wilhelm, 2012; Potkin and Bunney, 2012; Stickgold and Walker, 2013). Recent work has indicated that the removal of redundant neural connections during sleep (Li et al., 2017) can be compared to the minimization of complexity through elimination of redundant parameters during Bayesian model reduction (BMR) in Bayesian approaches to brain function (Hobson and Friston, 2012; Friston et al., 2017b, 2019). Removal of redundant connections while strengthening others should promote learning (Li et al., 2017). Artificial agents used for learning specific tasks are often based on the formalism of Markov decision processes (MDPs) (Watkins, 1989; Mnih et al., 2015; Hafner et al., 2019). In this formalism, the complexity of the environment determines the complexity of the latent space.
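
For Gaussian priors and posteriors, Bayesian model reduction has a closed form: the evidence for a model with a reduced (for example, pruned or more precise) prior can be scored directly from the posterior of the full model, without refitting. The sketch below is a generic numpy implementation of that identity, included only to make the BMR step concrete; the variable names and toy example are ours, not code from this paper.

    # Generic sketch of Bayesian model reduction (BMR) for Gaussian densities,
    # for illustration only; not code from the paper. Given the full prior
    # p(z) = N(m_p, S_p), the reduced prior N(m_r, S_r) and the full posterior
    # q(z) = N(m_q, S_q), BMR scores delta_F = ln ∫ q(z) p_reduced(z) / p(z) dz,
    # the change in log evidence from swapping in the reduced prior.
    import numpy as np

    def bayesian_model_reduction(m_q, S_q, m_p, S_p, m_r, S_r):
        P_q, P_p, P_r = (np.linalg.inv(S) for S in (S_q, S_p, S_r))
        P_red = P_q + P_r - P_p                    # reduced posterior precision
        b = P_q @ m_q + P_r @ m_r - P_p @ m_p
        m_red = np.linalg.solve(P_red, b)          # reduced posterior mean
        logdet = lambda A: np.linalg.slogdet(A)[1]
        delta_F = 0.5 * (logdet(P_q) + logdet(P_r) - logdet(P_p) - logdet(P_red)) + 0.5 * (
            m_red @ P_red @ m_red + m_p @ P_p @ m_p - m_q @ P_q @ m_q - m_r @ P_r @ m_r)
        return delta_F, m_red, np.linalg.inv(P_red)

    # Toy usage: a reduced prior that shrinks one redundant parameter towards zero.
    d = 3
    m_q, S_q = np.array([0.9, 0.0, -0.4]), 0.1 * np.eye(d)     # full posterior
    m_p, S_p = np.zeros(d), np.eye(d)                           # full (vague) prior
    m_r, S_r = np.zeros(d), np.diag([1.0, 1e-4, 1.0])           # prune 2nd parameter
    dF, _, _ = bayesian_model_reduction(m_q, S_q, m_p, S_p, m_r, S_r)
    print(dF > 0)   # True: evidence favours removing the redundant parameter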
