Inverse stochastic resonance (ISR) is a counterintuitive phenomenon in which noise reduces the oscillation frequency of an oscillator, with a minimum occurring at an intermediate noise intensity, sometimes even quenching the oscillations entirely. In neuroscience, ISR was first experimentally verified in cerebellar Purkinje neurons [Buchin et al., PLOS Comput. Biol. 12, e1005000 (2016)]. These experiments showed that ISR enables locally optimal information transfer between the input and output spike trains of neurons. Subsequent studies have further demonstrated the efficiency of information processing and transfer in neural networks with small-world topology. We numerically investigate the impact of adaptivity on ISR in a small-world network of noisy FitzHugh-Nagumo (FHN) neurons operating in a bi-metastable regime, consisting of a metastable fixed point and a metastable limit cycle. Our results show that the degree of ISR depends strongly on the value of the FHN model's timescale separation parameter ε. The network structure adapts dynamically via either spike-timing-dependent plasticity (STDP), with potentiation-/depression-domination parameter P, or homeostatic structural plasticity (HSP), with rewiring frequency F. We demonstrate that both STDP and HSP amplify ISR when ε lies within the bi-metastability region of the FHN neurons. Specifically, at larger values of ε within this region, higher rewiring frequencies F enhance ISR at intermediate (weak) synaptic noise intensities, while values of P corresponding to depression-domination (potentiation-domination) consistently enhance (degrade) ISR. Moreover, although the STDP and HSP control parameters can jointly enhance ISR, P has a greater impact on improving ISR than F. Our findings inform future ISR enhancement strategies in noisy artificial neural circuits, aiming to optimize local information transfer between input and output spike trains in neuromorphic systems, and suggest avenues for experiments in biological neural networks.
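To illustrate how ISR is typically quantified at the single-neuron level, the following is a minimal sketch, not the authors' code: it assumes a common fast-slow FHN form with additive Gaussian white noise in the fast variable, integrates it with the Euler-Maruyama scheme, and sweeps the noise intensity σ to look for a minimum in the mean spike rate. The parameter values (eps, a), the spike-detection thresholds, and the initial condition are illustrative assumptions; locating the bi-metastable regime in practice requires tuning eps and a near the model's bifurcation point.

```python
# Hypothetical single-neuron ISR probe (illustrative, not the authors' model):
# fast-slow FitzHugh-Nagumo neuron with additive noise on the fast variable v,
#   dv = (v - v^3/3 - w) dt + sigma dW,   dw = eps (v + a) dt,
# integrated by Euler-Maruyama; the mean spike rate is estimated by counting
# upward threshold crossings of v with simple hysteresis.
import numpy as np

def spike_rate(eps, a, sigma, T=2000.0, dt=0.01, seed=0):
    rng = np.random.default_rng(seed)
    n_steps = int(T / dt)
    v, w = -1.0, -0.5            # illustrative start near the fixed point
    spikes, above = 0, False
    sqdt = np.sqrt(dt)
    for _ in range(n_steps):
        dv = (v - v**3 / 3.0 - w) * dt + sigma * sqdt * rng.standard_normal()
        dw = eps * (v + a) * dt
        v, w = v + dv, w + dw
        if v > 1.0 and not above:   # upward crossing counts as one spike
            spikes += 1
            above = True
        elif v < 0.0:               # reset hysteresis once v returns low
            above = False
    return spikes / T               # spikes per unit (model) time

# Sweep the noise intensity; in an ISR regime the rate dips at intermediate sigma.
for sigma in [0.0, 0.02, 0.05, 0.1, 0.2, 0.4]:
    print(f"sigma={sigma:.2f}  rate={spike_rate(eps=0.1, a=0.99, sigma=sigma):.3f}")
```

In a network version of such a sketch, the coupling weights would additionally evolve under an STDP rule (biased toward potentiation or depression by a parameter like P) or be rewired at a frequency like F under HSP, which is the adaptive setting the abstract describes.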