Abstract

In many perceptual tasks there is an observed linear relationship between the magnitude of the estimated variable and the error of the estimate; this is called Weber’s law. Applied to estimates of interval timing, this linear relationship is called the scalar timing law. Under a stronger variant of the law, not only does the standard deviation of the errors increase linearly with time, but the complete distribution of timing estimates scales linearly as well. We have recently developed a model of a plastic network that can represent time through learned temporal dynamics [1]. The temporal representations formed in our network resemble those described experimentally in V1 [2]. Here we train this network to encode various times, from several hundred ms to slightly less than 2 sec, and decode the spiking model using a simple population decoding scheme. We show that the standard deviation of the reported durations in such a spiking network scales nearly linearly with the encoded time. We analyze how the slope and intercept of the linear fits depend on system parameters such as the number of neurons and the network noise level. Although the error scales linearly with time, the distribution of the decoded times does not scale linearly with the time: its shape changes. We then examine the effect of including an additional source of noise in the decoding process and show that, when such decoding noise is present, the response distribution can scale nearly linearly as well. We analyze how this additional noise source affects the slope and intercept of the linear fit, and what magnitude of noise is needed to obtain linear scaling of the distribution.
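
For concreteness, the two notions of scaling discussed above can be written as follows; the notation below is introduced here only as a reminder and is not taken from the paper. Writing t for the encoded duration and \hat{\tau} for the decoded estimate, the scalar timing law in its weak form requires only that the standard deviation of the estimates grow linearly with t, whereas the stronger variant requires the whole distribution of estimates to rescale with t:

\sigma(\hat{\tau} \mid t) \approx a\,t + b,
\qquad
p_t(\hat{\tau}) \approx \frac{1}{t}\, f\!\left(\frac{\hat{\tau}}{t}\right),

where a and b are the slope and intercept of the linear fit and f is a fixed template distribution.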

Highlights

  • In many perceptual tasks there is an observed linear relationship between the magnitude of the estimated variable and the error of the estimate; this is called Weber’s law

  • We have recently developed a model of a plastic network that can represent time through learned temporal dynamics [1]

  • We analyze how the slope and intercept of the linear fits depend on system parameters such as the number of neurons and network noise levels (a toy illustration of such a fit follows these highlights)
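
The following Python sketch is a toy illustration, not the spiking network of [1]: decoded durations are simulated with a Gaussian noise model whose parameters (the 0.05 proportional noise, decoder_noise, and the target durations) are arbitrary assumptions, the slope and intercept are obtained from a linear fit of the standard deviation against the encoded time, and the closing comment notes how the scalar (distributional) property would be tested.

import numpy as np

rng = np.random.default_rng(0)
targets = np.array([0.3, 0.6, 1.0, 1.5, 1.9])   # encoded durations (s), within the range mentioned in the abstract
n_trials = 5000

def decode(t, decoder_noise=0.0):
    """Toy duration estimate: internal noise proportional to t, plus an
    optional time-independent term standing in for decoder noise."""
    internal = rng.normal(0.0, 0.05 * t, n_trials)       # grows with t
    readout = rng.normal(0.0, decoder_noise, n_trials)   # fixed magnitude
    return t + internal + readout

for sigma_dec in (0.0, 0.03):
    estimates = [decode(t, sigma_dec) for t in targets]
    stds = np.array([e.std() for e in estimates])
    slope, intercept = np.polyfit(targets, stds, 1)      # linear fit of sigma vs t
    print(f"decoder noise {sigma_dec}: slope = {slope:.3f}, intercept = {intercept:.3f}")
    # Scalar-property check: histograms of estimate / t collapse onto a single
    # shape only when the variability is (close to) proportional to t.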
