  • Open Access
  • Research Article
  • 10.51628/001c.130789
WhACC: Whisker Automatic Contact Classifier with Expert Human-Level Performance
  • Feb 25, 2025
  • Neurons, Behavior, Data analysis, and Theory
  • Phillip Maire + 4 more

The rodent vibrissal system remains pivotal in advancing neuroscience research, particularly for studies of cortical plasticity, learning, decision-making, sensory encoding, and sensorimotor integration. While this model system provides notable advantages for quantifying active tactile input, it is hindered by the labor-intensive process of curating touch events across millions of video frames. Even with the aid of automated tools like the Janelia Whisker Tracker, millisecond-accurate touch curation often requires more than 3 hours of manual review per million video frames. We address this limitation by introducing the Whisker Automatic Contact Classifier (WhACC), a Python package designed to identify touch periods from high-speed videos of head-fixed behaving rodents with human-level performance. For our model design, we train ResNet50V2 on whisker images and extract features. Next, we engineer features to improve performance, with an emphasis on temporal consistency. Finally, we select only the most important features and use them to train a LightGBM classifier. Classification accuracy is assessed against three expert human curators on over one million frames. WhACC shows pairwise touch classification agreement on 99.5% of video frames, equal to between-human agreement. Additionally, comparison between an expert curator and WhACC on a holdout dataset comprising nearly four million frames and 16 single-unit electrophysiology recordings shows negligible differences in neural characterization metrics. Finally, we offer an easy way to select and curate a subset of data to adaptively retrain WhACC. Including this retraining step, we reduce the human hours required to curate a 100-million-frame dataset from ~333 hours to ~6 hours.
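The pipeline described above (per-frame CNN features, then temporally engineered features, then a gradient-boosted classifier) is not spelled out in the abstract, so the following is only a minimal NumPy sketch of the temporal feature-engineering step; the function name, window size, and choice of rolling statistics are hypothetical, not taken from the WhACC package.

```python
import numpy as np

def temporal_features(frame_features, window=5):
    """Augment per-frame features with rolling mean and std over a short
    window, a simple way to encourage temporally consistent touch labels
    (hypothetical sketch, not the WhACC implementation)."""
    n, d = frame_features.shape
    pad = window // 2
    padded = np.pad(frame_features, ((pad, pad), (0, 0)), mode="edge")
    # stack the `window` shifted copies, then reduce over the window axis
    rolled = np.stack([padded[i:i + n] for i in range(window)], axis=0)
    roll_mean = rolled.mean(axis=0)
    roll_std = rolled.std(axis=0)
    return np.concatenate([frame_features, roll_mean, roll_std], axis=1)

feats = np.random.rand(100, 8)       # stand-in for per-frame CNN features
aug = temporal_features(feats)       # shape (100, 24): original + mean + std
```

The augmented matrix would then feed a frame-wise classifier in place of the raw per-frame features.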

  • Open Access
  • Research Article
  • 10.51628/001c.129626
A rodent paradigm for studying perceptual decisions under asymmetric reward
  • Feb 10, 2025
  • Neurons, Behavior, Data analysis, and Theory
  • Xiaoyue Zhu + 1 more

Many real-life decisions involve both perceptual processes and weighing the consequences of different actions. However, the neural mechanisms underlying perceptual decisions have typically been examined separately from those underlying economic decisions. Here, we trained rats to make choices informed by both perceptual and value cues on a trial-by-trial basis. As in typical perceptual tasks, subjects were rewarded for correctly categorizing a tone relative to a learned threshold. To add an economic component, a light indicated, on each trial, whether correct responses to one side gave higher rewards than correct responses to the other side. As such, on trials with some perceptual uncertainty, it could be worthwhile to choose the unlikely option if it had higher expected value. We found that, despite subjects' sensitivity to the frequency of the cue and the reward sizes, their behavior was not optimal: subjects tended to shift their choices in a stimulus-independent way following light flashes. Moreover, subjects tended to under-shift, which could be interpreted as being over-confident in their perceptual beliefs or as being risk-averse.
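The trade-off this task sets up can be made concrete with a two-line expected-value computation. This is purely illustrative; the belief and reward parameterization below is hypothetical, not the paper's model.

```python
def expected_values(p_left_correct, r_left, r_right):
    """Expected value of each side given the perceptual belief that
    'left' is the correct choice (hypothetical parameterization)."""
    return p_left_correct * r_left, (1.0 - p_left_correct) * r_right

# A 60/40 percept favoring the left, but a 3x reward on the right:
ev_left, ev_right = expected_values(0.6, 1.0, 3.0)
# here the perceptually unlikely option carries the higher expected value
```

An ideal observer would pick whichever side has the larger expected value, which is exactly the benchmark against which the rats' under-shifting is measured.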

  • Open Access
  • Research Article
  • Cited: 1
  • 10.51628/001c.127807
How to optimize neuroscience data utilization and experiment design for advancing brain models of visual and linguistic cognition?
  • Jan 6, 2025
  • Neurons, Behavior, Data analysis, and Theory
  • Greta Tuckute + 10 more

In recent years, neuroscience has made significant progress in building large-scale artificial neural network (ANN) models of brain activity and behavior. However, there is no consensus on the most efficient ways to collect data and design experiments to develop the next generation of models. This article explores the controversial opinions that have emerged on this topic in the domain of vision and language. Specifically, we address two critical points. First, we weigh the pros and cons of using qualitative insights from empirical results versus raw experimental data to train models. Second, we consider model-free (intuition-based) versus model-based approaches for data collection, specifically experimental design and stimulus selection, for optimal model development. Finally, we consider the challenges of developing a synergistic approach to experimental design and model building, including encouraging data and model sharing and the implications of iterative additions to existing models. The goal of the paper is to discuss decision points and propose directions for both experimenters and model developers in the quest to understand the brain.

  • Open Access
  • Research Article
  • 10.51628/001c.127770
A study of animal action segmentation algorithms across supervised, unsupervised, and semi-supervised learning paradigms
  • Dec 21, 2024
  • Neurons, Behavior, Data analysis, and Theory
  • Ari Blau + 6 more

Action segmentation of behavioral videos is the process of labeling each frame as belonging to one or more discrete classes, and is a crucial component of many studies that investigate animal behavior. A wide range of algorithms exist to automatically parse discrete animal behavior, encompassing supervised, unsupervised, and semi-supervised learning paradigms. These algorithms - which include tree-based models, deep neural networks, and graphical models - differ widely in their structure and assumptions on the data. Using four datasets spanning multiple species - fly, mouse, and human - we systematically study how the outputs of these various algorithms align with manually annotated behaviors of interest. Along the way, we introduce a semi-supervised action segmentation model that bridges the gap between supervised deep neural networks and unsupervised graphical models. We find that fully supervised temporal convolutional networks with the addition of temporal information in the observations perform the best on our supervised metrics across all datasets.
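As a toy illustration of why temporal information helps frame-wise segmentation, even a sliding majority vote over noisy per-frame labels (a far simpler stand-in than the temporal convolutional networks compared in the paper) removes isolated misclassifications:

```python
from collections import Counter

def mode_filter(labels, window=5):
    """Smooth a per-frame label sequence with a sliding majority vote
    (illustrative stand-in for temporal modeling, not a method from the paper)."""
    pad = window // 2
    padded = [labels[0]] * pad + list(labels) + [labels[-1]] * pad
    return [Counter(padded[i:i + window]).most_common(1)[0][0]
            for i in range(len(labels))]

noisy = [0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1]
smoothed = mode_filter(noisy)
# smoothed == [0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1]
```

The two spurious single-frame flips in `noisy` are absorbed into their surrounding segments, which is the same effect, in miniature, that adding temporal context to the observations gives the supervised models.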

  • Open Access
  • Research Article
  • 10.51628/001c.127771
An introduction to reinforcement learning for neuroscience
  • Dec 21, 2024
  • Neurons, Behavior, Data analysis, and Theory
  • Kristopher T Jensen

Reinforcement learning has a rich history in neuroscience, from early work on dopamine as a reward prediction error signal for temporal difference learning (Schultz et al., 1997) to recent work suggesting that dopamine could implement a form of ‘distributional reinforcement learning’ popularized in deep learning (Dabney et al., 2020). Throughout this literature, there has been a tight link between theoretical advances in reinforcement learning and neuroscientific experiments and findings. As a result, the theories describing our experimental data have become increasingly complex and difficult to navigate. In this review, we cover the basic theory underlying classical work in reinforcement learning and build up to an introductory overview of methods in modern deep reinforcement learning that have found applications in systems neuroscience. We start with an overview of the reinforcement learning problem and classical temporal difference algorithms, followed by a discussion of ‘model-free’ and ‘model-based’ reinforcement learning together with methods such as DYNA and successor representations that fall in between these two extremes. Throughout these sections, we highlight the close parallels between such machine learning methods and related work in both experimental and theoretical neuroscience. We then provide an introduction to deep reinforcement learning with examples of how these methods have been used to model different learning phenomena in systems neuroscience, such as meta-reinforcement learning (Wang et al., 2018) and distributional reinforcement learning (Dabney et al., 2020). Code that implements the methods discussed in this work and generates the figures is also provided.
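The temporal-difference updates the review builds on can be written in a few lines. This tabular TD(0) sketch on a toy 5-state chain (the chain, learning rate, and sweep count are hypothetical) makes the reward-prediction-error term `delta` explicit; it is this quantity that Schultz et al. (1997) linked to dopamine signaling.

```python
import numpy as np

# Tabular TD(0) on a 5-state chain with a reward of 1 on the final transition.
n_states, alpha, gamma = 5, 0.1, 1.0
V = np.zeros(n_states + 1)           # V[n_states] is the terminal state, value 0
for _ in range(2000):                # repeated sweeps along the chain
    for s in range(n_states):
        r = 1.0 if s == n_states - 1 else 0.0
        delta = r + gamma * V[s + 1] - V[s]   # reward prediction error
        V[s] += alpha * delta
# with gamma = 1, every state's value converges to 1.0 (the certain final reward)
```

Each update moves the value estimate a step toward the bootstrapped target, so the prediction error shrinks to zero as the values converge.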

  • Open Access
  • Research Article
  • Cited: 5
  • 10.51628/001c.124867
A standardised open science framework for sharing and re-analysing neural data acquired to continuous stimuli
  • Oct 16, 2024
  • Neurons, Behavior, Data analysis, and Theory
  • Giovanni M Di Liberto + 11 more

Neurophysiology research has demonstrated that it is possible and valuable to investigate sensory processing in scenarios involving continuous sensory streams, such as speech and music. Over the past 10 years or so, novel analytic frameworks combined with growing participation in data sharing have led to a surge of publicly available datasets involving continuous sensory experiments. However, open science efforts in this domain of research remain scattered, lacking a cohesive set of guidelines. This paper presents an end-to-end open science framework for the storage, analysis, sharing, and re-analysis of neural data recorded during continuous sensory experiments. We propose a data structure that builds on existing custom structures (Continuous-event Neural Data), providing a precise naming convention and data types, as well as a workflow for storing and loading data in the general-purpose BIDS structure. The framework has been designed to interface easily with existing toolboxes, such as EelBrain, NapLib, MNE, and the mTRF-Toolbox. We present guidelines from both the user view (how to rapidly re-analyse existing data) and the experimenter view (how to store, analyse, and share), making the process as straightforward and accessible as possible. Additionally, we introduce a web-based data browser that enables the effortless replication of published results and data re-analysis.
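Re-analysis of such continuous-stimulus data typically reduces to estimating a temporal response function (TRF) by regressing the neural response onto lagged copies of the stimulus, the approach implemented by the mTRF-Toolbox. Below is a minimal NumPy sketch; the function name, lag count, and regularization value are illustrative, not taken from any of the toolboxes named above.

```python
import numpy as np

def trf_ridge(stim, resp, n_lags, lam=1e-3):
    """Ridge-regress a response onto lagged copies of a continuous stimulus
    to estimate a temporal response function (illustrative sketch)."""
    n = len(stim)
    X = np.zeros((n, n_lags))
    for k in range(n_lags):          # column k holds the stimulus delayed by k samples
        X[k:, k] = stim[:n - k]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_lags), X.T @ resp)

rng = np.random.default_rng(0)
stim = rng.standard_normal(500)
kernel = np.array([1.0, -0.5, 0.25])        # ground-truth TRF for the demo
resp = np.convolve(stim, kernel)[:500]      # response = stimulus filtered by kernel
w = trf_ridge(stim, resp, n_lags=3)
# w recovers the kernel up to a tiny ridge-induced bias
```

The same lagged-design-matrix idea underlies forward (encoding) models across all the toolboxes listed in the abstract, which is what makes a shared data structure for stimulus and response streams practical.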

  • Open Access
  • Research Article
  • 10.51628/001c.123366
Efficient Deep Reinforcement Learning with Predictive Processing Proximal Policy Optimization
  • Sep 4, 2024
  • Neurons, Behavior, Data analysis, and Theory
  • Burcu Küçükoğlu + 5 more

Advances in reinforcement learning (RL) often rely on massive compute resources and remain notoriously sample inefficient. In contrast, the human brain is able to efficiently learn effective control strategies using limited resources. This raises the question of whether insights from neuroscience can be used to improve current RL methods. Predictive processing is a popular theoretical framework which maintains that the human brain is actively seeking to minimize surprise. We show that recurrent neural networks which predict their own sensory states can be leveraged to minimize surprise, yielding substantial gains in cumulative reward. Specifically, we present the Predictive Processing Proximal Policy Optimization (P4O) agent: an actor-critic reinforcement learning agent that applies predictive processing to a recurrent variant of the PPO algorithm by integrating a world model in its hidden state. Even without hyperparameter tuning, P4O significantly outperforms a baseline recurrent variant of the PPO algorithm on multiple Atari games using a single GPU. It also outperforms other state-of-the-art agents given the same wall-clock time and exceeds human gamer performance on multiple games, including Seaquest, which is a particularly challenging environment in the Atari domain. Altogether, our work underscores how insights from the field of neuroscience may support the development of more capable and efficient artificial agents.

  • Open Access
  • Research Article
  • Cited: 7
  • 10.51628/001c.94404
Artificial intelligence is algorithmic mimicry: why artificial “agents” are not (and won’t be) proper agents
  • Feb 27, 2024
  • Neurons, Behavior, Data analysis, and Theory
  • Johannes Jaeger

What is the prospect of developing artificial general intelligence (AGI)? I investigate this question by systematically comparing living and algorithmic systems, with a special focus on the notion of “agency.” There are three fundamental differences to consider: (1) Living systems are autopoietic, that is, self-manufacturing, and therefore able to set their own intrinsic goals, while algorithms exist in a computational environment with target functions, both of which are provided by an external agent. (2) Living systems are embodied in the sense that there is no separation between their symbolic and physical aspects, while algorithms run on computational architectures that maximally isolate software from hardware. (3) Living systems experience a large world, in which most problems are ill defined (and not all definable), while algorithms exist in a small world, in which all problems are well defined. These three differences imply that living and algorithmic systems have very different capabilities and limitations. In particular, it is extremely unlikely that true AGI (beyond mere mimicry) can be developed in the current algorithmic framework of AI research. Consequently, discussions about the proper development and deployment of algorithmic tools should be shaped around the dangers and opportunities of current narrow AI, not the extremely unlikely prospect of the emergence of true agency in artificial systems.

  • Open Access
  • Research Article
  • Cited: 4
  • 10.51628/001c.91252
Visuomotor feedback tuning in the absence of visual error information
  • Dec 15, 2023
  • Neurons, Behavior, Data analysis, and Theory
  • Sae Franklin + 1 more

Large increases in visuomotor feedback gains occur during initial adaptation to novel dynamics, which we propose are due to increased internal model uncertainty. That is, large errors indicate increased uncertainty in our prediction of the environment, increasing feedback gains and co-contraction as a coping mechanism. Our previous work showed distinct patterns of visuomotor feedback gains during abrupt or gradual adaptation to a force field, suggesting two complementary processes: reactive feedback gains increasing with internal model uncertainty and the gradual learning of predictive feedback gains tuned to the environment. Here we further investigate what drives these changes in visuomotor feedback gains during learning, by separating the effects of internal model uncertainty from the visual error signal through the removal of visual error information. Removing visual error information suppresses the visuomotor feedback gains in all conditions, but the pattern of modulation throughout adaptation is unaffected. Moreover, we find increased muscle co-contraction in both abrupt and gradual adaptation protocols, demonstrating that visuomotor feedback responses are independent of the level of co-contraction. Our results suggest that visual feedback benefits motor adaptation tasks through higher visuomotor feedback gains, but when it is not available participants adapt at a similar rate through increased co-contraction. We have demonstrated a direct connection between learning and predictive visuomotor feedback gains, independent of visual error signals. This further supports our hypothesis that internal model uncertainty drives initial increases in feedback gains.

  • Open Access
  • Research Article
  • Cited: 3
  • 10.51628/001c.90831
Circumstantial evidence and explanatory models for synapses in large-scale spike recordings
  • Dec 2, 2023
  • Neurons, Behavior, Data analysis, and Theory
  • Ian H Stevenson

Whether, when, and how causal interactions between neurons can be meaningfully studied from observations of neural activity alone are vital questions in neural data analysis. Here we aim to better outline the concept of functional connectivity for the specific situation where systems neuroscientists aim to study synapses using spike train recordings. In some cases, cross-correlations between the spikes of two neurons are such that, although we may not be able to say that a relationship is causal without experimental manipulations, models based on synaptic connections provide precise explanations of the data. Additionally, there is often strong circumstantial evidence that pairs of neurons are monosynaptically connected. Here we illustrate how circumstantial evidence for or against synapses can be systematically assessed and show how models of synaptic effects can provide testable predictions for pair-wise spike statistics. We use case studies from large-scale multi-electrode spike recordings to illustrate key points and to demonstrate how modeling synaptic effects using large-scale spike recordings opens a wide range of data analytic questions.
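The pairwise spike statistic at the heart of this approach is the cross-correlogram: counting, for each time lag, how often a spike in one neuron is followed (or preceded) by a spike in the other. A minimal sketch over integer time bins (the function name, bin convention, and lag range are illustrative):

```python
import numpy as np

def cross_correlogram(spikes_a, spikes_b, max_lag=20):
    """Count spike pairs of neuron B relative to neuron A at each lag,
    with spike times given as integer bins (illustrative sketch)."""
    counts = np.zeros(2 * max_lag + 1)
    bins_b = set(spikes_b)
    for t in spikes_a:
        for lag in range(-max_lag, max_lag + 1):
            if t + lag in bins_b:
                counts[lag + max_lag] += 1
    return counts

# B consistently fires 2 bins after A, as a short-latency synapse might produce:
ccg = cross_correlogram([10, 20, 30], [12, 22, 32])
# the correlogram peaks at lag +2 (index max_lag + 2)
```

A sharp, short-latency, asymmetric peak of this kind is the circumstantial evidence for a monosynaptic connection that the article discusses; model-based approaches then ask whether a synaptic model predicts the full correlogram shape.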