Abstract

New technologies for recording the activity of large neural populations during complex behavior provide exciting opportunities for investigating the neural computations that underlie perception, cognition, and decision-making. Non-linear state space models offer an interpretable signal-processing framework by combining an intuitive dynamical system with a probabilistic observation model; this can yield insight into neural dynamics and neural computation, and support the development of neural prosthetics and treatments based on feedback control. It brings with it the challenge of learning both the latent neural state and the underlying dynamical system, because neither is known a priori for neural systems. We developed a flexible online learning framework for latent non-linear state dynamics and filtered latent states. Using the stochastic gradient variational Bayes approach, our method jointly optimizes the parameters of the non-linear dynamical system, the observation model, and the black-box recognition model. Unlike previous approaches, our framework can incorporate non-trivial distributions of observation noise and has constant time and space complexity. These features make our approach amenable to real-time applications and give it the potential to automate analysis and experimental design in ways that testably track and modify behavior using stimuli designed to influence learning.

Highlights

  • Discovering interpretable structure from a streaming high-dimensional time series has many applications in science and engineering

  • Equation (7) assumes that the observation vector y_t is sampled from a probability distribution P determined by the latent state x_t through a linear-non-linear map, possibly together with extra parameters; at each time t, the observation model, recognition model, and dynamics model are updated by backpropagation (see the sketch after this list)

  • We demonstrate our method on a range of non-linear dynamical systems relevant to neuroscience
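
The second bullet above compresses the whole learning loop into one sentence; the following is a minimal PyTorch sketch of one such online update, assuming a Gaussian recognition model, a Poisson linear-non-linear observation model, and an MLP dynamics model. All architectures, dimensions, and names here are illustrative assumptions, not the authors' implementation.

```python
# Sketch of one online stochastic gradient variational Bayes step:
# the recognition network maps (y_t, x_{t-1}) to a Gaussian filtering
# posterior over x_t; the ELBO combines the Poisson observation
# likelihood with a KL term against the dynamics prediction, and a
# single backpropagation pass updates all three modules.
import torch
import torch.nn as nn

dim_x, dim_y = 2, 50  # illustrative latent and observed dimensions

recognition = nn.Sequential(nn.Linear(dim_y + dim_x, 32), nn.Tanh(),
                            nn.Linear(32, 2 * dim_x))  # -> (mean, log-variance)
dynamics = nn.Sequential(nn.Linear(dim_x, 32), nn.Tanh(),
                         nn.Linear(32, dim_x))         # x_pred = f(x_prev)
readout = nn.Linear(dim_x, dim_y)                      # linear map; exp link below
opt = torch.optim.Adam([*recognition.parameters(),
                        *dynamics.parameters(),
                        *readout.parameters()], lr=1e-3)

def step(y_t, x_prev):
    """One filtering + learning step; cost is constant in t."""
    stats = recognition(torch.cat([y_t, x_prev]))
    mu, logvar = stats[:dim_x], stats[dim_x:]
    x_t = mu + torch.randn(dim_x) * torch.exp(0.5 * logvar)  # reparameterized sample
    log_rate = readout(x_t)                                  # linear-non-linear map
    log_lik = (y_t * log_rate - torch.exp(log_rate)).sum()   # Poisson log-likelihood
    x_pred = dynamics(x_prev)
    # KL between the diagonal-Gaussian posterior and a unit-variance Gaussian
    # dynamics prior centered at f(x_prev) (a simplifying assumption)
    kl = 0.5 * (torch.exp(logvar).sum()
                + ((mu - x_pred) ** 2).sum() - dim_x - logvar.sum())
    loss = kl - log_lik  # negative single-sample ELBO
    opt.zero_grad()
    loss.backward()
    opt.step()
    return mu.detach()   # filtered state estimate, detached for the next step

# Usage on a stream: x = torch.zeros(dim_x); then x = step(y, x) per sample y.
```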

Introduction

Discovering interpretable structure in a streaming high-dimensional time series has many applications in science and engineering. Since the invention of the celebrated Kalman filter, state space models have been successful in providing a succinct (and more interpretable) description of the underlying dynamics that explains the observed time series as trajectories in a low-dimensional state space. The latent state evolves according to dynamics of the form x_{t+1} = f(x_t) + ε_t, where ε_t is intended to capture the unobserved (latent) perturbations of the state x_t. Such (spatially) continuous state space models are natural in many applications where the changes are slow and the underlying system follows physical laws and constraints (e.g., object tracking), or where learning those laws is of great interest (e.g., in neuroscience and robotics) (Roweis and Ghahramani, 2001; Mante et al., 2013; Sussillo and Barak, 2013; Frigola et al., 2014; Zhao and Park, 2017). Further interpretation of f can provide understanding as to how neural computation is implemented (Mante et al., 2013; Zhao and Park, 2016; Russo et al., 2018).
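
To make this model class concrete, here is a minimal NumPy sketch of such a generative state space model: non-linear latent dynamics x_{t+1} = f(x_t) + ε_t observed through a linear-non-linear map with Poisson noise. The Van der Pol oscillator and the exponential link are illustrative choices only, not specifics taken from this paper.

```python
# Simulate a non-linear state space model: latent Van der Pol dynamics
# with additive state noise, observed as Poisson spike counts through
# a linear readout and an exponential nonlinearity.
import numpy as np

rng = np.random.default_rng(0)
dim_x, dim_y, T, dt = 2, 50, 1000, 0.02

def f(x, mu=1.5):
    """One Euler step of the Van der Pol oscillator (an illustrative f)."""
    dx = np.array([x[1], mu * (1.0 - x[0] ** 2) * x[1] - x[0]])
    return x + dt * dx

C = 0.5 * rng.normal(size=(dim_y, dim_x))  # loading matrix of the readout
b = np.log(5.0 * dt) * np.ones(dim_y)      # baseline log firing rate

x = np.array([0.5, 0.0])
latents, observations = [], []
for t in range(T):
    x = f(x) + 0.05 * rng.normal(size=dim_x)  # eps_t: latent perturbation
    rate = np.exp(C @ x + b)                  # linear-non-linear map
    latents.append(x)
    observations.append(rng.poisson(rate))    # observed counts y_t
latents, observations = np.array(latents), np.array(observations)
```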
