Abstract

According to the efficient coding hypothesis, sensory neurons are adapted to provide maximal information about the environment, given some biophysical constraints. In early visual areas, stimulus-induced modulations of neural activity (or tunings) are predominantly single-peaked. However, periodic tuning, as exhibited by grid cells, has been linked to a significant increase in decoding performance. Does this imply that the tuning curves in early visual areas are sub-optimal? We argue that the time scale at which neurons encode information is imperative for understanding the respective advantages of single-peaked and periodic tuning curves. Here, we show that the possibility of catastrophic (large) errors creates a trade-off between decoding time and decoding ability. We investigate how decoding time and stimulus dimensionality affect the optimal shape of tuning curves for removing catastrophic errors, focusing in particular on the spatial periods of a class of circular tuning curves. We show an overall trend for minimal decoding time to increase with increasing Fisher information, implying a trade-off between accuracy and speed. This trade-off is reinforced whenever the stimulus dimensionality is high or there is ongoing activity. Thus, given constraints on processing speed, we present normative arguments for the existence of the single-peaked tuning organization observed in early visual areas.

Editor's evaluation

This fundamental study provides important insight into coding strategies in sensory areas. The study was well done, and the analysis and simulations were highly convincing. This study should be of particular interest to anybody who cares about efficient coding.

https://doi.org/10.7554/eLife.84531.sa0

Introduction

One of the fundamental problems in systems neuroscience is understanding how sensory information can be represented in the spiking activity of an ensemble of neurons. The problem is exacerbated by the fact that individual neurons are highly noisy and variable in their responses, even to identical stimuli (Arieli et al., 1996). A common feature of early sensory representation is that neocortical neurons in primary sensory areas change their average responses only to a small range of features of the sensory stimulus. For instance, some neurons in the primary visual cortex respond to moving bars oriented at specific angles (Hubel and Wiesel, 1962). This observation has led to the notion of tuning curves. Together, a collection of tuning curves provides a possible basis for a neural code. Considerable emphasis has been put on understanding how the structure of noise and correlations affects stimulus representation given a set of tuning curves (Shamir and Sompolinsky, 2004; Averbeck and Lee, 2006; Franke et al., 2016; Zylberberg et al., 2016; Moreno-Bote et al., 2014; Kohn et al., 2016). More recently, the issue of local and catastrophic errors, dating back to the work of Shannon (Shannon, 1949), has been raised in the context of neuroscience (e.g. Xie, 2002; Sreenivasan and Fiete, 2011).
Intuitively, local errors are small estimation errors that depend on the trial-by-trial variability of the neural responses and the local shapes of the tuning curves surrounding the true stimulus condition (Figure 1a, bottom plot, see s1). On the other hand, catastrophic errors are very large estimation errors that depend on the trial-by-trial variability and the global shape of the tuning curves (Figure 1a, bottom plot, see s2). While significant effort has been put into studying how stimulus tuning and different noise structures affect local errors, less is known about the interactions with catastrophic errors. For example, Fisher information is a common measure of the accuracy of a neural code (Brunel and Nadal, 1998; Abbott and Dayan, 1999; Guigon, 2003; Moreno-Bote et al., 2014; Benichoux et al., 2017). The Cramér-Rao bound states that a lower limit on the minimal mean squared error (MSE) of any unbiased estimator is given by the inverse of the Fisher information (Lehmann and Casella, 1998). Thus, increasing Fisher information reduces the lower bound on the MSE. However, because Fisher information can only capture local errors, the true MSE might be considerably larger in the presence of catastrophic errors (Xie, 2002; Kostal et al., 2015; Malerba et al., 2022), especially if the available decoding time is short (Bethge et al., 2002; Finkelstein et al., 2018).

Figure 1. Illustrations of local and catastrophic errors. (a) Top: A two-neuron system encoding a single variable using single-peaked tuning curves (λ=1). Bottom: The tuning curves create a one-dimensional activity trajectory embedded in a two-dimensional neural activity space (black trajectory). Decoding the two stimulus conditions, s1 and s2, illustrates the two types of estimation errors that can occur due to trial-by-trial variability: local ($\hat{s}_1$) and catastrophic ($\hat{s}_2$). (b) Same as in (a) but for periodic tuning curves (λ=0.5). Notice that the stimulus conditions are intermingled and that the stimulus cannot be determined from the firing rates. (c) Time evolution of the root mean squared error (RMSE) using maximum likelihood estimation (solid line) and the Cramér-Rao bound (dashed line) for a population of single-peaked tuning curves (N=600, w=0.3, average evoked firing rate $\bar{f}_{stim} = 20\exp(-1/w)B_0(1/w)$ sp/s, and b=2 sp/s). For about 50 ms, the RMSE is significantly larger than the predicted lower bound. (d) The empirical error distributions for the time point indicated in (c), where the RMSE strongly deviates from the predicted lower bound. Inset: A non-zero empirical error probability spans the entire stimulus domain. (e) Same as in (d) when the RMSE has roughly converged to the Cramér-Rao bound. Notice the absence of large estimation errors.

A curious observation is that tuning curves in early visual areas predominantly use single-peaked firing fields, whereas grid cells in the entorhinal cortex are known for their periodically distributed firing fields (Hafting et al., 2005). It has been shown that the multiple firing locations of grid cells increase the precision of the neural code compared to single-peaked tuning curves (Sreenivasan and Fiete, 2011; Mathis et al., 2012; Wei et al., 2015). This raises the question: why are periodic firing fields not a prominent organization of early visual processing too? The theoretical arguments in favor of periodic tuning curves have mostly focused on local errors under the assumption that catastrophic errors are negligible (Sreenivasan and Fiete, 2011).
However, given the response variability, it takes a finite amount of time to accumulate a sufficient number of spikes to decode the stimulus. Given that fast processing speed is a common feature of visual processing (Thorpe et al., 1996; Fabre-Thorpe et al., 2001; Rolls and Tovee, 1994; Resulaj et al., 2018), it is crucial that each neural population in the processing chain can quickly produce a reliable stimulus-evoked signal. Therefore, the time required to produce signals without catastrophic errors likely puts fundamental constraints on any neural code, especially in early visual areas. Here, we contrast Fisher information with the minimal decoding time required to remove catastrophic errors (i.e. the time until Fisher information becomes a reasonable descriptor of the MSE). We base the results on the maximum likelihood estimator for uniformly distributed stimuli (i.e. the maximum a posteriori estimator), using populations of tuning curves with different numbers of peaks. We show that the minimal decoding time tends to increase with increasing Fisher information in the case of independent Poissonian noise for each neuron. This suggests a trade-off between the decoding accuracy of a neural population and the speed by which it can produce a reliable signal. Furthermore, we show that the difference in minimal decoding time grows with the number of jointly encoded stimulus features (stimulus dimensionality) and in the presence of ongoing (non-specific) activity. Thus, single-peaked tuning curves require shorter decoding times and are more robust to ongoing activity than periodic tuning curves. Finally, we illustrate the issue of large estimation errors and periodic tuning in a simple spiking neural network model tracking either a step-like stimulus change or a continuously time-varying stimulus.

Results

Shapes of tuning curves, Fisher information, and catastrophic errors

To enable a comparison between single-peaked and periodic (multi-peaked) tuning curves, we consider circular tuning curves responding to a D-dimensional stimulus, s, according to

(1) $f_i(\mathbf{s}) = a_i \prod_{j=1}^{D} \exp\left(\frac{1}{w}\left(\cos\left(\frac{2\pi}{\lambda_i}(s_j - s'_{i,j})\right) - 1\right)\right) + b$

where $a_i$ is the peak amplitude of the stimulus-related tuning curve i, w is a width scaling parameter, $\lambda_i$ defines the spatial period of the tuning curve, $s'_{i,j}$ determines the location of the peak(s) in the j:th stimulus dimension, and b determines the amount of ongoing activity (see Figure 1a-b, top panels). The parameters are kept fixed for each neuron, thus ignoring any effect of learning or plasticity. In the following, the stimulus domain is set to $\mathbf{s} \in [0,1)^D$ for simplicity. To avoid boundary effects, we assume that the stimulus has periodic boundaries (i.e. $s_j=0$ and $s_j=1$ are the same stimulus condition) and adjust any decoded value to lie within the stimulus domain, for example,

(2) $\hat{s}_{ML} = 1 + 0.1 \pmod{1} = 0.1$,

see Materials and methods - 'Implementation of maximum likelihood estimator' for details. We assume that the stimulus is uniformly distributed across its domain and that its dimensions are independent. This can be seen as a worst-case scenario, as it maximizes the entropy of the stimulus. In a single trial, we assume that the number of emitted spikes for each neuron is conditionally independent and follows a Poisson distribution, given some stimulus-dependent rate $f_i(\mathbf{s})$. Thus, the probability of observing a particular activity pattern, $\mathbf{r}$, in a population of N neurons, given the stimulus-dependent rates and decoding time T, is

(3) $p(\mathbf{r}|\mathbf{s},T) = \prod_{i=1}^{N} p(r_i \,|\, T f_i(\mathbf{s})) = \prod_{i=1}^{N} \frac{(T f_i(\mathbf{s}))^{r_i} \exp(-T f_i(\mathbf{s}))}{r_i!}$.
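As a minimal sketch of this encoding model (assuming NumPy; the parameter values are illustrative, not the paper's exact simulation code), Equation 1 and Equation 3 can be simulated directly:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def rates(s, s_pref, a=20.0, w=0.3, lam=1.0, b=2.0):
    """Equation 1: firing rates f_i(s) of one module for a stimulus s in [0,1)^D.

    s      : stimulus, shape (D,)
    s_pref : preferred stimuli, shape (N, D)
    """
    phase = 2 * np.pi / lam * (s[None, :] - s_pref)
    return a * np.exp((np.cos(phase) - 1.0) / w).prod(axis=1) + b

def spike_counts(s, s_pref, T, **tuning):
    """Equation 3: independent Poisson spike counts in a window of length T (s)."""
    return rng.poisson(T * rates(s, s_pref, **tuning))

# A single-peaked (lambda = 1) module of N = 600 neurons encoding a 1D stimulus.
# With a = 20 and D = 1, the mean evoked rate over the stimulus domain is
# 20*exp(-1/w)*B_0(1/w) sp/s, matching the value quoted in Figure 1.
s_pref = np.linspace(0.0, 1.0, 600, endpoint=False)[:, None]
r = spike_counts(s=np.array([0.37]), s_pref=s_pref, T=0.05)  # 50 ms window
```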
Given a model of neural responses, the Cramér-Rao bound provides a lower bound on the accuracy by which the population can communicate a signal as the inverse of the Fisher information. For sufficiently large populations, using the population and spike count models in Equation 1 and Equation 3, the Fisher information is given by (for $a_i=a$ and b=0 for all neurons; see Sreenivasan and Fiete, 2011 or Appendix 2 - 'Fisher information and the Cramér-Rao bound' for details)

(4) $J \approx (2\pi)^2 \frac{aTN}{w}\, B_0(1/w)^{D-1} B_1(1/w) \exp(-D/w)\, \overline{\lambda^{-2}}$

where $\overline{\lambda^{-2}}$ denotes the sample average of the squared inverse of the (relative) spatial periods across the population, and $B_\alpha(\cdot)$ denotes the modified Bessel functions of the first kind. Equation 4 (and similar expressions) suggests that populations consisting of periodic tuning curves, for which $\overline{\lambda^{-2}} \gg 1$, are superior at communicating a stimulus signal compared to populations using tuning curves with only single peaks, where $\overline{\lambda^{-2}} = 1$. However, (inverse) Fisher information only predicts the amount of local errors for an efficient estimator. Hence, the presence of catastrophic errors (Figure 1a, bottom) can be identified by large deviations from the MSE predicted for an asymptotically efficient estimator (Figure 1c-d). Therefore, we define the minimal decoding time as the shortest time required to approach the Cramér-Rao bound (Figure 1c and e).

Periodic tuning curves and stimulus ambiguity

To understand why the amount of catastrophic errors can differ between populations with different spatial periods, consider first the problem of stimulus ambiguity that can arise with periodic tuning curves. If all tuning curves in the population share the same relative spatial period, λ, then the stimulus-evoked responses can only provide unambiguous information about the stimulus in the range [0,λ). Beyond this range, the response distributions are no longer unique. Thus, single-peaked tuning curves (λ=1) provide unambiguous information about the stimulus. Periodic tuning curves (λ<1), on the other hand, require tuning curves with two or more distinct spatial periods to resolve the stimulus ambiguity (Fiete et al., 2008; Mathis et al., 2012; Wei et al., 2015). In the following, we assume the tuning curves are organized into discrete modules, where all tuning curves within a module share the same spatial period (Figure 1b), mimicking the organization of grid cells (Stensola et al., 2012). For convenience, assume that λ1>λ2>...>λL, where L is the number of modules. Thus, the first module provides the most coarse-grained resolution of the stimulus interval, and each successive module provides an increasingly fine-grained resolution. It has been suggested that a geometric progression of spatial periods, such that λi=cλi-1 for some scale factor 0<c≤1, may be optimal for maximizing the resolution of the stimulus while reducing the required number of neurons (Mathis et al., 2012; Wei et al., 2015). However, as we show later, trial-by-trial variability can still cause stimulus ambiguity and catastrophic errors - at least for short decoding times - even when using multiple modules with different spatial periods.
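For a numeric sense of Equation 4 and of the module organization just described, the following sketch (assuming NumPy and SciPy; parameter values are illustrative) compares the Fisher information of a single-peaked population with that of a five-module population whose spatial periods follow a geometric progression:

```python
import numpy as np
from scipy.special import i0, i1  # modified Bessel functions B_0 and B_1

def fisher_information(a, T, N, w, D, lam_inv_sq_mean):
    """Equation 4 (with b = 0): Fisher information per stimulus dimension."""
    return ((2 * np.pi) ** 2 * a * T * N / w
            * i0(1 / w) ** (D - 1) * i1(1 / w)
            * np.exp(-D / w) * lam_inv_sq_mean)

# Single-peaked population: all lambda_i = 1, so mean(lambda^-2) = 1.
J_single = fisher_information(a=20, T=0.05, N=600, w=0.3, D=1,
                              lam_inv_sq_mean=1.0)

# Periodic population: L = 5 modules with geometric periods lambda_k = c**(k-1).
lams = 0.7 ** np.arange(5)
J_periodic = fisher_information(a=20, T=0.05, N=600, w=0.3, D=1,
                                lam_inv_sq_mean=np.mean(lams ** -2.0))

print(J_periodic / J_single)  # > 1: the periodic code has higher Fisher information
```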
(Very) Short decoding times - when both Fisher information and MSE fail

While it is known that Fisher information is not an accurate predictor of the MSE when the decoding time is short (Bethge et al., 2002), the shortcomings of the MSE itself have received less attention. Although the MSE is often interpreted as a measure of accuracy, its insensitivity to rare outliers makes it a poor measure of reliability. Comparing the MSE directly between populations can therefore be misleading if the distributions of errors are qualitatively different: if the amounts of local errors differ, a lower MSE does not necessarily imply fewer catastrophic errors. This is exemplified in Figure 2, comparing a single-peaked and a periodic population encoding a two-dimensional stimulus using the suggested optimal scale factor, c≈1/1.44 (Wei et al., 2015). During the first ≈30 ms, the single-peaked population has the lower MSE of the two populations despite having lower Fisher information (Figure 2a). Furthermore, comparing the error distributions after the periodic population achieves a lower MSE (the black circle in Figure 2a) shows that the periodic population still suffers from rare errors that span the entire stimulus range (Figure 2b-c, insets). As we will show, a comparison of MSE, as a measure of reliability, only becomes valid once catastrophic errors are removed. Here we assume that catastrophic errors strongly affect the usability of a neural code. Therefore, we argue that the first criterion for any rate-based neural code should be to satisfy its constraint on decoding time to avoid catastrophic errors.

Figure 2. (Very) Short decoding times - when both Fisher information and MSE fail. (a) Time evolution of the root mean squared error (RMSE), averaged across trials and stimulus dimensions, using maximum likelihood estimation (solid lines) for two populations (blue: λ1=1, c=1; red: λ1=1, c=1/1.44). Dashed lines indicate the lower bound predicted by the Cramér-Rao bound. The black circle indicates the point where the periodic population becomes optimal in terms of MSE. (b) The empirical distribution of errors at the time indicated by the black circle in (a). The single-peaked population (blue) has a wider distribution of errors centered around 0 than the periodic population (red), as suggested by its higher MSE. Inset: Zooming in on rare error events reveals that while the periodic population has a narrower distribution of errors around 0, it also has occasional errors across large parts of the stimulus domain. (c) The empirical CDF of the errors for the same two populations as in (b). Inset: a zoomed-in version (last 1%) of the empirical CDF highlights the heavy-tailed distribution of errors for the periodic population. Parameters used in the simulations: stimulus dimensionality D=2, number of modules L=5, number of neurons N=600, average evoked firing rate $\bar{f}_{stim} = 20\exp(-1/w)B_0(1/w)$ sp/s, ongoing activity b=2 sp/s, and width parameter w=0.3. Note that the estimation errors for the two stimulus dimensions are pooled together.

Minimal decoding times in populations with two modules

How does the choice of spatial periods impact the decoding time required to remove catastrophic errors? To get some intuition, we first consider populations encoding a one-dimensional stimulus using only two different spatial periods, λ1 and λ2. From the perspective of a probabilistic decoder (Seung and Sompolinsky, 1993; Deneve et al., 1999; Ma et al., 2006), assuming that the stimulus is uniformly distributed, the maximum likelihood (ML) estimator is Bayesian optimal (and asymptotically efficient). The maximum likelihood estimator aims at finding the stimulus condition that is the most likely cause of the observed activity, r, or

(5) $\hat{s}_{ML} = \operatorname{argmax}_s\, p(\mathbf{r}|s)$,

where $p(\mathbf{r}|s)$ is called the likelihood function. The likelihood function equals the probability of the observed neural activity, r, assuming that the stimulus condition was s.
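Given the Poisson model of Equation 3, the estimator in Equation 5 can be evaluated by brute force on a discretized stimulus grid. A minimal sketch for a one-dimensional stimulus, reusing the hypothetical rates helper and the simulated trial (r, s_pref) from the earlier sketch (the grid resolution is arbitrary):

```python
import numpy as np

def ml_decode(r, s_pref, T, grid_size=1024, **tuning):
    """Equation 5: grid-based maximum likelihood decoding of a 1D stimulus."""
    grid = np.linspace(0.0, 1.0, grid_size, endpoint=False)
    best_s, best_ll = grid[0], -np.inf
    for s in grid:
        f = rates(np.array([s]), s_pref, **tuning)
        # Poisson log likelihood up to terms independent of s:
        # log p(r|s) = sum_i [ r_i log(T f_i(s)) - T f_i(s) ] + const.
        ll = np.sum(r * np.log(T * f) - T * f)
        if ll > best_ll:
            best_s, best_ll = s, ll
    return best_s

s_hat = ml_decode(r, s_pref, T=0.05)  # decode the trial simulated above
```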
In the case of independent Poisson spike counts (or at least independence across modules), each module contributes to the joint likelihood function $p(\mathbf{r}|s)$ with an individual likelihood function, Q1 or Q2 (Wei et al., 2015). Thus, the joint likelihood function can be seen as the product of the two individual likelihood functions, where each likelihood is λi-periodic:

(6) $p(\mathbf{r}|s) = Q_1(\mathbf{r}|s)\, Q_2(\mathbf{r}|s)$.

In this sense, each module provides its own ML estimate of the stimulus, $s^{(1)}_{ML} = \operatorname{argmax}_s Q_1(\mathbf{r}|s)$ and $s^{(2)}_{ML} = \operatorname{argmax}_s Q_2(\mathbf{r}|s)$. Because of the periodicity of the tuning curves, each likelihood can have multiple modes (e.g. Figure 3a and b, top panels). For the largest mode of the joint likelihood function to also be centered close to the true stimulus condition, the distance δ between $s^{(1)}_{ML}$ and $s^{(2)}_{ML}$ must be smaller than that between any other pair of modes of Q1 and Q2. Thus, to avoid catastrophic errors, δ must be smaller than some largest allowed distance δ* that guarantees this relation (see Equations 25-30 for the calculation of δ*, assuming the stimulus is in the middle of the domain). As δ varies from trial to trial, we limit the probability of the decoder experiencing catastrophic errors to some small error probability, $p_{error}$, by imposing that

(7) $\Pr(|\delta| > \delta^*) < p_{error}$.

Assuming that the estimation of each module becomes efficient before the joint estimation, Equation 7 can be reinterpreted as a lower bound on the decoding time required before the estimation based on the joint likelihood function becomes efficient:

(8) $T_{th} > 2\left(\frac{\operatorname{erfinv}(1 - p_{error})}{\delta^*}\right)^2 \left(\frac{1}{J_{1,norm}} + \frac{1}{J_{2,norm}}\right)$,

where erfinv(·) is the inverse of the error function and $J_{k,norm}$ refers to the time-normalized Fisher information of module k (see Materials and methods for the derivation). Thus, the spatial periods of the modules influence the minimal decoding time by determining: (1) the largest allowed distance δ* between the estimates of the modules, and (2) the variances of the estimates, given by the inverse of their respective Fisher information.

Figure 3 (with 2 supplements). Catastrophic errors and minimal decoding times in populations with two modules. (a) Top: Sampled individual likelihood functions of two modules with very different spatial periods. Bottom: The sampled joint likelihood function for the individual likelihood functions in the top panel. (b-c) Same as in (a) but for spatial periods that are similar but not identical, and for a single-peaked population, respectively. (d) Bottom: The dependence of the minimal decoding time on the scale factor c for λ1=1. Blue circles indicate the simulated minimal decoding times, and the black line indicates the estimate of the minimal decoding times according to Equation 8, with $p_{error}=10^{-4}$. Top left: The predicted value of 1/δ*. Top right: The inverse of the Fisher information. (e) Same as (d) but for λ1=1/2. (f) RMSE (lines), the 99.8th percentile (filled circles), and the maximal error (open circles) of the error distribution for several choices of scale factor, c, and decoding time. The color code is the same as in panels (d-e). Parameters used in (d-f): population size N=600, number of modules L=2, scale factors c=0.05-1, width parameter w=0.3, average evoked firing rate $\bar{f}_{stim} = 20\exp(-1/w)B_0(1/w)$ sp/s, ongoing activity b=0 sp/s, and threshold factor α=2.
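Equation 8 itself is straightforward to evaluate. A sketch (assuming SciPy; the inputs δ*, J1,norm, and J2,norm would come from the module geometry and from Equation 4):

```python
from scipy.special import erfinv

def decoding_time_bound(delta_star, J1_norm, J2_norm, p_error=1e-4):
    """Equation 8: lower bound on the decoding time needed before the joint
    ML estimate becomes efficient, given the largest tolerated distance
    delta_star between the two modules' estimates and the modules'
    time-normalized Fisher informations."""
    return (2 * (erfinv(1.0 - p_error) / delta_star) ** 2
            * (1.0 / J1_norm + 1.0 / J2_norm))
```

Note how the two mechanisms named in the text enter separately: a smaller tolerated distance δ* or a smaller per-module Fisher information both push the bound up.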
To give some intuition for the approximation: if the spatial periods of the modules are very different, λ2≪λ1, then there exist many peaks of Q2 around the peak of Q1 (Figure 3a). Additionally, modes of Q1 and Q2 far away from the true stimulus can lie close together. Thus, λ2≪λ1 can create a highly multi-modal joint likelihood function where small deviations in $s^{(1)}_{ML}$ and $s^{(2)}_{ML}$ can cause a shift, or a change, of the maximal mode of the joint likelihood. To avoid this, δ* must be small, leading to longer decoding times by Equation 8. Suppose instead that the two modules have similar spatial periods, λ2∼λ1, or that λ1 is close to a multiple of λ2. In that case, the distance between peaks a few periods away is also small, again leading to longer decoding times (Figure 3b). In other words, periodic tuning suffers from the dilemma that small shifts in the individual stimulus estimates can cause catastrophic shifts in the joint likelihood function. Although these might be rare events, the possibility of such shifts increases the probability of catastrophic errors. Thus, assuming λ1<1, both small and large scale factors c can lead to long decoding times. When λ1=1, however, only small scale factors c pose such problems, at least unless the stimulus is close to the periodic edge (i.e. s≈0 or s≈1, see Figure 3—figure supplement 1). On the other hand, compared to single-peaked tuning curves, periodic tuning generally leads to sharper likelihood functions, increasing the accuracy of the estimates once catastrophic errors are removed (e.g. compare the widths of the joint likelihood functions in Figure 3a-c).

To test the approximation in Equation 8, we simulated a set of populations (N=600 neurons) with different spatial periods. The populations were created using identical tuning parameters except for the spatial periods, whose distribution varied across the populations, and the amplitudes, which were adjusted to ensure an equal average firing rate (across all stimulus conditions) for all neurons (see Materials and methods for details on the simulations). As described above, the spatial periods were related by a scale factor c. Different values of c were tested with the largest period being either λ1=1 or λ1=1/2. Furthermore, only populations with unambiguous codes over the stimulus interval were included (i.e. c≠1/2,1/3,1/4,… for λ1=1/2; Mathis et al., 2012). Note, however, that there is no restriction that the periodicity of the tuning curves align with the periodicity of the stimulus (i.e. 1/λi does not need to be an integer). For each population, the minimal decoding time was found by gradually increasing the decoding time until the empirical MSE was lower than twice the predicted lower bound (i.e. α=2, see Equation 10 and Materials and methods for details). Limiting the probability of catastrophic errors to $p_{error}=10^{-4}$, Equation 8 is a good predictor of the minimal decoding time (Figure 3d-e, bottom panels; coefficient of determination R²≈0.92 and R²≈0.95 for λ1=1 and λ1=1/2, respectively). For both λ1=1 and λ1=1/2, the minimal decoding time increases overall with decreasing scale factor, c (see Figure 3d-e). However, especially for λ1=1/2, the trend is interrupted by large peaks (Figure 3e). For λ1=1, there are deviations from the predicted minimal decoding time for small scale factors, c. They occur whenever λ2 is slightly below an integer fraction of λ1=1 and become more pronounced when the sensitivity is increased by lowering the threshold factor to α=1.2 (see Figure 3—figure supplement 2). We believe one cause of these deviations is the additional shifts across the periodic boundary (as in Figure 3—figure supplement 1) that can occur when c is just below 1/2, 1/3, 1/4, etc.
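The criterion used above for the minimal decoding time translates directly into a simple search procedure. A sketch under stated assumptions (the simulate_mse callback is a hypothetical stand-in for a full ML-decoding simulation at time T; crb_rate denotes the time-normalized inverse Fisher information, so that the Cramér-Rao bound at time T is crb_rate / T):

```python
import numpy as np

def minimal_decoding_time(simulate_mse, crb_rate, alpha=2.0,
                          T_grid=np.geomspace(1e-3, 1.0, 100)):
    """Smallest decoding time T (seconds) at which the empirical MSE drops
    below alpha times the Cramer-Rao bound, mimicking the alpha = 2
    criterion described in the text."""
    for T in T_grid:
        if simulate_mse(T) < alpha * crb_rate / T:
            return T
    return None  # criterion not met within the scanned range
```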
To confirm that the estimated minimal decoding times have some predictive power over the error distributions, we re-simulated a subset of the populations for various decoding times, T, using 15,000 randomly sampled stimulus conditions (Figure 3f). Both the RMSE and the outlier errors (the 99.8th percentile and the maximal error, that is, the 100th percentile) agree with the pattern of minimal decoding times, suggesting that a single-peaked population is good at removing large errors at very short time scales.

Minimal decoding times for populations with more than two modules

From the two-module case above, it is clear that the choice of scale factor influences the minimal decoding time. However, Equation 8 is difficult to interpret and is only valid for two-module systems (L=2). To approximate how the minimal decoding time scales with the distribution of spatial periods in populations with more than two modules, we extended the approximation method first introduced by Xie (Xie, 2002). The method was originally used to assess the number of neurons required to reach the Cramér-Rao bound for single-peaked tuning curves with additive Gaussian noise under the ML estimator, and it only considered encoding a one-dimensional stimulus variable. We adapted this method to approximate the required decoding time for stimuli with arbitrary dimensionality, Poisson-distributed spike counts, and tuning curves with arbitrary spatial periods. In this setting, the scaling of the minimal decoding time with the spatial periods, λ1,…,λL, can be approximated as (see Materials and methods for the derivation)

(9) $T_{th} \gg A(w)\, \frac{\exp(D/w)}{a N\, B_0(1/w)^{D-1}}\, \frac{\overline{\lambda^{-3}}^{\,2}}{\overline{\lambda^{-2}}^{\,3}} \simeq \frac{A^*(w)}{N\, \overline{f^{(D)}_{stim}}}\, \frac{\overline{\lambda^{-3}}^{\,2}}{\overline{\lambda^{-2}}^{\,3}}$,

where $\overline{\lambda^{-2}}$ and $\overline{\lambda^{-3}}$ denote the sample averages of the inverse spatial periods (squared and cubed, respectively) across the population, $\overline{f^{(D)}_{stim}}$ is the average evoked firing rate across the stimulus domain, and A(w) (or A*(w)) is a function of w (see Materials and methods for the detailed expression). The last approximation holds with equality whenever all tuning curves have an integer number of peaks. The derivation assumes the absence of ongoing activity and that the amplitudes within each population are similar, a1≈…≈aN. Importantly, the approximation also assumes the existence of a unique solution to the maximum likelihood equations. It is therefore ill-equipped to predict the issues of stimulus ambiguity. Thus, going back to the two-module cases, Equation 9 cannot capture the additional effects of λ2≪λ1, or of λ1 being close to a multiple of λ2, as in Figure 3d-e. On the other hand, complementing the theory presented in Equation 8, Equation 9 provides a more interpretable expression for the scaling of the minimal decoding time. For c≤1, the minimal decoding time, Tth, is expected to increase with decreasing scale factor, c (see Equation 47, and the numeric sketch below). The scaling should also be similar for different choices of λ1. Furthermore, assuming all other parameters are constant, the minimal decoding time should grow roughly exponentially with the number of stimulus dimensions.

To confirm the validity of Equation 9, we simulated populations of N=600 tuning curves across L=5 modules. Again, the spatial periods of the modules were related by a scale factor, c (Figure 4a). To avoid the effects of c≪1, we limited the range of the scale factor to 0.3≤c≤1. The upper bound on c was kept (for λ1=1) to include entirely single-peaked populations.
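The period-dependent factor in Equation 9 is easy to evaluate for a geometric progression of spatial periods. A short sketch (parameter values illustrative) showing that the factor, and hence the bound on Tth, grows as the scale factor c shrinks:

```python
import numpy as np

def period_factor(c, L, lam1=1.0):
    """The factor (mean lambda^-3)^2 / (mean lambda^-2)^3 from Equation 9,
    for L modules with geometric spatial periods lam_k = lam1 * c**(k-1)."""
    lams = lam1 * c ** np.arange(L)
    return np.mean(lams ** -3.0) ** 2 / np.mean(lams ** -2.0) ** 3

for c in (1.0, 0.7, 0.5, 0.3):
    print(c, period_factor(c, L=5))
# For a fully single-peaked population (c = 1, lam1 = 1) the factor equals 1;
# it increases monotonically as c decreases, implying longer minimal decoding times.
```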
Again, the assumption of homogeneous amplitudes in Equation 9 was dropped in the simulations (Figure 4b, left column) to ensure that the average firing rate across the stimulus domain is equal for all neurons (see Figure 4b, right column, for the empirical average firing rates). This had little effect on the Fisher information, where the theoretical prediction was based on the average amplitudes across all populations with the same λ1 and stimulus dimensionality D (see Figure 4c, inset). As before, Fisher information grows with decreasing scale factor, c, and with decreasing spatial period, λ1. As expected, increasing the stimulus dimensionality decreases Fisher information if all other parameters are kept constant. The minimal decoding time, on the other hand, increases with decreasing spatial periods and with increasing stimulus dimensionality (Figure 4c). The increase in decoding time between D=1 and D=2 is also very well predicted by Equation 9, at least for c>0.5 (Figure 4—figure supplement 1a). In these simulations, the choice of width parameter is compatible with experimental data (Ringach et al., 2002), but similar trends were found for a range of width parameters (although the differences become smaller for small w, see Figure 4—figure supplement 1b-d).

Figure 4 (with 6 supplements). Minimal decoding times for populations with five modules. (a) Illustration of the likelihood functions of a population with L=5 modules using scale factor c=0.7. (b) The peak stimulus-evoked amplitudes of each neuron (left c
